Jan 31 03:47:55 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 31 03:47:55 crc restorecon[4588]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 
03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:55 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 03:47:56 crc 
restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 
03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 03:47:56 crc restorecon[4588]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 03:47:56 crc restorecon[4588]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 31 03:47:56 crc kubenswrapper[4667]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 03:47:56 crc kubenswrapper[4667]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 31 03:47:56 crc kubenswrapper[4667]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 03:47:56 crc kubenswrapper[4667]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 31 03:47:56 crc kubenswrapper[4667]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 31 03:47:56 crc kubenswrapper[4667]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 03:47:56 crc kubenswrapper[4667]: I0131 03:47:56.998999 4667 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.010932 4667 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.010965 4667 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.010973 4667 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.010980 4667 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.010987 4667 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.010992 4667 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.010997 4667 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011004 4667 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011010 4667 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011016 4667 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011021 4667 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011028 4667 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011036 4667 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011043 4667 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011050 4667 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011056 4667 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011063 4667 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011070 4667 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011076 4667 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011083 4667 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011088 4667 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011093 4667 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011100 4667 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011108 4667 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011115 4667 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
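The long run of "unrecognized feature gate" warnings above (it continues below) is the kubelet skipping OpenShift-side gate names that its own gate registry does not know; they are warnings, not failures. A tally that collapses the repetition, using the same assumed kubelet-boot.log copy:

    import pathlib
    import re
    from collections import Counter

    text = pathlib.Path("kubelet-boot.log").read_text(encoding="utf-8")

    gates = Counter(re.findall(r"unrecognized feature gate: (\w+)", text))
    for name, n in gates.most_common(10):
        print(f"{n}x {name}")
    print(len(gates), "distinct unrecognized gates")

Each name recurs because the gate set is evidently parsed more than once during startup (see the repeated blocks below).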
Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011122 4667 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011128 4667 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011133 4667 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011139 4667 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011145 4667 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011151 4667 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011156 4667 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011162 4667 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011167 4667 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011173 4667 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011179 4667 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011186 4667 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011193 4667 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011201 4667 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
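Three different messages come out of feature_gate.go in this boot, and the call-site line in the klog header is the easiest discriminator: :330 flags an unknown gate name, :351 a deprecated gate being force-set, :353 a GA gate being set explicitly. A bucketing sketch; the line numbers are as observed in this capture and are not a stable interface:

    import pathlib
    import re

    text = pathlib.Path("kubelet-boot.log").read_text(encoding="utf-8")

    # Call sites observed in this capture only; klog file:line can change between builds.
    BUCKETS = {
        "feature_gate.go:330": "unrecognized",
        "feature_gate.go:351": "deprecated-but-set",
        "feature_gate.go:353": "ga-but-set",
    }

    counts = {label: 0 for label in BUCKETS.values()}
    for src in re.findall(r"(feature_gate\.go:\d+)\]", text):
        if src in BUCKETS:
            counts[BUCKETS[src]] += 1
    print(counts)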
Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011209 4667 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011214 4667 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011221 4667 feature_gate.go:330] unrecognized feature gate: Example Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011226 4667 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011232 4667 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011238 4667 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011243 4667 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011249 4667 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011254 4667 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011259 4667 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011265 4667 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011271 4667 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011279 4667 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011286 4667 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011292 4667 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011299 4667 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011306 4667 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011312 4667 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011318 4667 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011324 4667 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011330 4667 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011335 4667 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011340 4667 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011345 4667 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011350 4667 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011355 4667 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI 
Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011360 4667 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011365 4667 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011370 4667 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011376 4667 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011380 4667 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.011385 4667 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011502 4667 flags.go:64] FLAG: --address="0.0.0.0" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011514 4667 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011526 4667 flags.go:64] FLAG: --anonymous-auth="true" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011536 4667 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011545 4667 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011551 4667 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011563 4667 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011571 4667 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011578 4667 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011585 4667 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011593 4667 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011600 4667 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011606 4667 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011613 4667 flags.go:64] FLAG: --cgroup-root="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011620 4667 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011626 4667 flags.go:64] FLAG: --client-ca-file="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011632 4667 flags.go:64] FLAG: --cloud-config="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011638 4667 flags.go:64] FLAG: --cloud-provider="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011644 4667 flags.go:64] FLAG: --cluster-dns="[]" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011651 4667 flags.go:64] FLAG: --cluster-domain="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011657 4667 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011663 4667 flags.go:64] FLAG: --config-dir="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011669 4667 flags.go:64] FLAG: 
--container-hints="/etc/cadvisor/container_hints.json" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011676 4667 flags.go:64] FLAG: --container-log-max-files="5" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011684 4667 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011690 4667 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011697 4667 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011703 4667 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011709 4667 flags.go:64] FLAG: --contention-profiling="false" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011715 4667 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011721 4667 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011728 4667 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011734 4667 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011741 4667 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011747 4667 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011753 4667 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011759 4667 flags.go:64] FLAG: --enable-load-reader="false" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011766 4667 flags.go:64] FLAG: --enable-server="true" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011773 4667 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011782 4667 flags.go:64] FLAG: --event-burst="100" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011788 4667 flags.go:64] FLAG: --event-qps="50" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011794 4667 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011805 4667 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011811 4667 flags.go:64] FLAG: --eviction-hard="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011819 4667 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011824 4667 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011831 4667 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011868 4667 flags.go:64] FLAG: --eviction-soft="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011876 4667 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011884 4667 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011891 4667 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011897 4667 flags.go:64] FLAG: --experimental-mounter-path="" Jan 31 03:47:57 crc 
kubenswrapper[4667]: I0131 03:47:57.011904 4667 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011910 4667 flags.go:64] FLAG: --fail-swap-on="true" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011916 4667 flags.go:64] FLAG: --feature-gates="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011924 4667 flags.go:64] FLAG: --file-check-frequency="20s" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011930 4667 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011937 4667 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011943 4667 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011950 4667 flags.go:64] FLAG: --healthz-port="10248" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011956 4667 flags.go:64] FLAG: --help="false" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011962 4667 flags.go:64] FLAG: --hostname-override="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011968 4667 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011975 4667 flags.go:64] FLAG: --http-check-frequency="20s" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011981 4667 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011987 4667 flags.go:64] FLAG: --image-credential-provider-config="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011993 4667 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.011999 4667 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012005 4667 flags.go:64] FLAG: --image-service-endpoint="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012011 4667 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012017 4667 flags.go:64] FLAG: --kube-api-burst="100" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012023 4667 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012029 4667 flags.go:64] FLAG: --kube-api-qps="50" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012035 4667 flags.go:64] FLAG: --kube-reserved="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012044 4667 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012050 4667 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012056 4667 flags.go:64] FLAG: --kubelet-cgroups="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012062 4667 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012069 4667 flags.go:64] FLAG: --lock-file="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012075 4667 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012082 4667 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012088 4667 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012098 4667 flags.go:64] 
FLAG: --log-json-split-stream="false" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012104 4667 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012110 4667 flags.go:64] FLAG: --log-text-split-stream="false" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012116 4667 flags.go:64] FLAG: --logging-format="text" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012122 4667 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012129 4667 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012135 4667 flags.go:64] FLAG: --manifest-url="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012142 4667 flags.go:64] FLAG: --manifest-url-header="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012150 4667 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012157 4667 flags.go:64] FLAG: --max-open-files="1000000" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012165 4667 flags.go:64] FLAG: --max-pods="110" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012171 4667 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012178 4667 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012184 4667 flags.go:64] FLAG: --memory-manager-policy="None" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012190 4667 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012196 4667 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012202 4667 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012209 4667 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012224 4667 flags.go:64] FLAG: --node-status-max-images="50" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012230 4667 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012237 4667 flags.go:64] FLAG: --oom-score-adj="-999" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012244 4667 flags.go:64] FLAG: --pod-cidr="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012251 4667 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012263 4667 flags.go:64] FLAG: --pod-manifest-path="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012270 4667 flags.go:64] FLAG: --pod-max-pids="-1" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012277 4667 flags.go:64] FLAG: --pods-per-core="0" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012283 4667 flags.go:64] FLAG: --port="10250" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012290 4667 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012298 4667 flags.go:64] FLAG: --provider-id="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012304 4667 
flags.go:64] FLAG: --qos-reserved="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012310 4667 flags.go:64] FLAG: --read-only-port="10255" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012316 4667 flags.go:64] FLAG: --register-node="true" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012322 4667 flags.go:64] FLAG: --register-schedulable="true" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012328 4667 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012339 4667 flags.go:64] FLAG: --registry-burst="10" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012345 4667 flags.go:64] FLAG: --registry-qps="5" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012350 4667 flags.go:64] FLAG: --reserved-cpus="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012357 4667 flags.go:64] FLAG: --reserved-memory="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012364 4667 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012370 4667 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012377 4667 flags.go:64] FLAG: --rotate-certificates="false" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012384 4667 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012389 4667 flags.go:64] FLAG: --runonce="false" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012396 4667 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012402 4667 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012408 4667 flags.go:64] FLAG: --seccomp-default="false" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012415 4667 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012421 4667 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012427 4667 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012433 4667 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012439 4667 flags.go:64] FLAG: --storage-driver-password="root" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012445 4667 flags.go:64] FLAG: --storage-driver-secure="false" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012451 4667 flags.go:64] FLAG: --storage-driver-table="stats" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012457 4667 flags.go:64] FLAG: --storage-driver-user="root" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012463 4667 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012470 4667 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012476 4667 flags.go:64] FLAG: --system-cgroups="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012482 4667 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012492 4667 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012498 
4667 flags.go:64] FLAG: --tls-cert-file="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012504 4667 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012513 4667 flags.go:64] FLAG: --tls-min-version="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012519 4667 flags.go:64] FLAG: --tls-private-key-file="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012525 4667 flags.go:64] FLAG: --topology-manager-policy="none" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012532 4667 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012538 4667 flags.go:64] FLAG: --topology-manager-scope="container" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012544 4667 flags.go:64] FLAG: --v="2" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012552 4667 flags.go:64] FLAG: --version="false" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012560 4667 flags.go:64] FLAG: --vmodule="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012568 4667 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.012574 4667 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012756 4667 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012763 4667 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012770 4667 feature_gate.go:330] unrecognized feature gate: Example Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012776 4667 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012782 4667 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012787 4667 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012793 4667 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012798 4667 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012803 4667 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012808 4667 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012814 4667 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012819 4667 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012824 4667 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012830 4667 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012857 4667 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012864 4667 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 03:47:57 crc 
kubenswrapper[4667]: W0131 03:47:57.012871 4667 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012880 4667 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012896 4667 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012902 4667 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012907 4667 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012913 4667 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012920 4667 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012927 4667 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012933 4667 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012938 4667 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012944 4667 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012949 4667 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012954 4667 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012961 4667 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012967 4667 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012972 4667 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012978 4667 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012983 4667 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012989 4667 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012994 4667 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.012999 4667 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013004 4667 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013009 4667 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013015 4667 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013022 4667 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. 
It will be removed in a future release. Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013028 4667 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013034 4667 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013040 4667 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013045 4667 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013051 4667 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013056 4667 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013061 4667 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013066 4667 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013071 4667 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013079 4667 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013084 4667 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013090 4667 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013095 4667 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013100 4667 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013105 4667 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013111 4667 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013116 4667 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013121 4667 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013126 4667 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013132 4667 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013137 4667 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013142 4667 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013147 4667 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013152 4667 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013161 4667 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013167 4667 feature_gate.go:330] unrecognized 
feature gate: VSphereMultiVCenters Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013172 4667 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013177 4667 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013185 4667 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.013192 4667 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.014516 4667 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.031306 4667 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.031434 4667 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.031610 4667 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.031684 4667 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.031750 4667 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.031815 4667 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.031925 4667 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.032014 4667 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.032083 4667 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.032153 4667 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.032222 4667 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
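The "feature gates: {map[...]}" summary above is the effective outcome of all those warnings: the gate map the kubelet actually runs with, here CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, KMSv1 and ValidatingAdmissionPolicy forced true and the rest false. A sketch that turns the Go-syntax dump into a Python dict, same assumed journal copy:

    import pathlib
    import re

    text = pathlib.Path("kubelet-boot.log").read_text(encoding="utf-8")

    # The summary is printed Go-style: feature gates: {map[Name:true Other:false ...]}
    m = re.search(r"feature gates: \{map\[([^\]]*)\]\}", text)
    gates = {name: val == "true"
             for name, val in (pair.split(":") for pair in m.group(1).split())}

    print("enabled:", ", ".join(sorted(k for k, v in gates.items() if v)))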
Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.032287 4667 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.032352 4667 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.032424 4667 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.032489 4667 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.032552 4667 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.032624 4667 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.032688 4667 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.032753 4667 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.032817 4667 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.032920 4667 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.032988 4667 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.033052 4667 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.033116 4667 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.033179 4667 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.033249 4667 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.033318 4667 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.033402 4667 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.033479 4667 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.033545 4667 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.033621 4667 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.033685 4667 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.033750 4667 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.034123 4667 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.034303 4667 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.034373 4667 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.034439 4667 feature_gate.go:330] 
unrecognized feature gate: PrivateHostedZoneAWS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.034503 4667 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.034566 4667 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.034641 4667 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035261 4667 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035319 4667 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035330 4667 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035339 4667 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035347 4667 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035356 4667 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035364 4667 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035372 4667 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035380 4667 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035388 4667 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035396 4667 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035404 4667 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035414 4667 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035422 4667 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035430 4667 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035438 4667 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035446 4667 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035456 4667 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035464 4667 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035473 4667 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035484 4667 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
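If the sheer repetition above looks like a bug: the same warning block reappears because the gate set is evidently parsed several times during startup (plausibly once per consumer of the configuration), and each completed pass ends in one feature_gate.go:386 summary line. Counting those gives the number of passes in a saved copy:

    import pathlib

    text = pathlib.Path("kubelet-boot.log").read_text(encoding="utf-8")

    # Each completed gate-parsing pass ends with exactly one summary record.
    passes = text.count("feature_gate.go:386] feature gates:")
    print(passes, "feature-gate parse passes in this capture")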
Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035495 4667 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035504 4667 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035512 4667 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035521 4667 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035529 4667 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035537 4667 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035545 4667 feature_gate.go:330] unrecognized feature gate: Example Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035553 4667 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035561 4667 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035569 4667 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035576 4667 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035584 4667 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.035598 4667 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035945 4667 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035963 4667 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035971 4667 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035981 4667 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035990 4667 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.035997 4667 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036007 4667 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036042 4667 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036050 4667 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036076 4667 feature_gate.go:330] unrecognized feature 
gate: PrivateHostedZoneAWS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036084 4667 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036092 4667 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036102 4667 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036111 4667 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036120 4667 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036129 4667 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036137 4667 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036145 4667 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036152 4667 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036161 4667 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036169 4667 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036177 4667 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036185 4667 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036193 4667 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036201 4667 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036209 4667 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036216 4667 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036224 4667 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036231 4667 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036239 4667 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036250 4667 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
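Earlier in this boot the flags.go:64 block dumped every command-line flag with its effective value in FLAG: --name="value" form; that block is the quickest place to answer "what is this kubelet actually running with". A sketch that loads the dump into a dict (the \s+ tolerates this capture's wrapping):

    import pathlib
    import re

    text = pathlib.Path("kubelet-boot.log").read_text(encoding="utf-8")

    # flags.go prints each flag as: FLAG: --<name>="<value>"
    flags = dict(re.findall(r'FLAG:\s+(--[\w-]+)="([^"]*)"', text))

    for key in ("--config", "--node-ip", "--max-pods", "--system-reserved"):
        print(key, "=", flags.get(key))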
Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036260 4667 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036271 4667 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036280 4667 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036288 4667 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036298 4667 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036306 4667 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036319 4667 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036327 4667 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036334 4667 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036342 4667 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036350 4667 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036358 4667 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036365 4667 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036373 4667 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036381 4667 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036389 4667 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036398 4667 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036406 4667 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036413 4667 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036421 4667 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036432 4667 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036441 4667 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036449 4667 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036459 4667 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036468 4667 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036477 4667 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036486 4667 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036494 4667 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036502 4667 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036510 4667 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036518 4667 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036525 4667 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036533 4667 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036541 4667 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036548 4667 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036558 4667 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
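Just below, the client-certificate manager reports an expiration of 2026-02-24 05:52:08 UTC with a rotation deadline of 2025-12-09 16:54, and the first CSR post then fails with connection refused against api-int.crc.testing:6443; this early in boot the API server is presumably not accepting connections yet, and the manager retries. A quick look at the margin between the two logged timestamps:

    from datetime import datetime, timezone

    # Timestamps as logged by certificate_manager.go below (deadline truncated to whole seconds).
    expiration = datetime(2026, 2, 24, 5, 52, 8, tzinfo=timezone.utc)
    deadline = datetime(2025, 12, 9, 16, 54, 0, tzinfo=timezone.utc)

    print("rotation margin before expiry:", expiration - deadline)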
Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036569 4667 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036577 4667 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036586 4667 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.036595 4667 feature_gate.go:330] unrecognized feature gate: Example
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.036608 4667 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.036906 4667 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.042956 4667 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.043100 4667 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.044999 4667 server.go:997] "Starting client certificate rotation"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.045038 4667 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.045292 4667 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-09 16:54:00.962149449 +0000 UTC
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.045433 4667 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.076695 4667 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.079769 4667 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 31 03:47:57 crc kubenswrapper[4667]: E0131 03:47:57.080762 4667 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.111:6443: connect: connection refused" logger="UnhandledError"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.100886 4667 log.go:25] "Validated CRI v1 runtime API"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.141579 4667 log.go:25] "Validated CRI v1 image API"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.144554 4667 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.153651 4667 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-31-03-42-43-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.153692 4667 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.174871 4667 manager.go:217] Machine: {Timestamp:2026-01-31 03:47:57.172605148 +0000 UTC m=+0.688940487 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:53d28e89-fb25-47fd-9db4-43074284604e BootID:1b790e77-6566-44ce-a51f-ed9234cccb89 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:e2:c5:73 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e2:c5:73 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:59:50:19 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:72:82:45 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:4f:64:1b Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:66:27:d6 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6a:0b:4e:f8:fe:1f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:46:e3:48:e7:db:94 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.175330 4667 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.175616 4667 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.176206 4667 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.176660 4667 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.176744 4667 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.177380 4667 topology_manager.go:138] "Creating topology manager with none policy"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.177408 4667 container_manager_linux.go:303] "Creating device plugin manager"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.177980 4667 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.178050 4667 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.179073 4667 state_mem.go:36] "Initialized new in-memory state store"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.179251 4667 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.185350 4667 kubelet.go:418] "Attempting to sync node with API server"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.185403 4667 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.185517 4667 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.185544 4667 kubelet.go:324] "Adding apiserver pod source"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.185572 4667 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.190929 4667 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.192224 4667 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.192476 4667 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused
Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.192663 4667 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused
Jan 31 03:47:57 crc kubenswrapper[4667]: E0131 03:47:57.192895 4667 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.111:6443: connect: connection refused" logger="UnhandledError"
Jan 31 03:47:57 crc kubenswrapper[4667]: E0131 03:47:57.193201 4667 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.111:6443: connect: connection refused" logger="UnhandledError"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.195548 4667 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.197242 4667 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.197291 4667 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.197309 4667 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.197325 4667 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.197350 4667 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.197368 4667 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.197384 4667 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.197408 4667 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.197426 4667 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.197444 4667 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.197465 4667 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.197480 4667 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.198586 4667 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.199376 4667 server.go:1280] "Started kubelet"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.199661 4667 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.199863 4667 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.200742 4667 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.201143 4667 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 31 03:47:57 crc systemd[1]: Started Kubernetes Kubelet.
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.203067 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.203258 4667 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.204247 4667 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.204295 4667 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.204514 4667 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.204214 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 18:43:34.030662466 +0000 UTC
Jan 31 03:47:57 crc kubenswrapper[4667]: E0131 03:47:57.206596 4667 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 31 03:47:57 crc kubenswrapper[4667]: E0131 03:47:57.206736 4667 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="200ms"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.207161 4667 factory.go:55] Registering systemd factory
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.207206 4667 factory.go:221] Registration of the systemd container factory successfully
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.207623 4667 factory.go:153] Registering CRI-O factory
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.207653 4667 factory.go:221] Registration of the crio container factory successfully
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.207754 4667 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.207786 4667 factory.go:103] Registering Raw factory
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.207812 4667 manager.go:1196] Started watching for new ooms in manager
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.208367 4667 server.go:460] "Adding debug handlers to kubelet server"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.208775 4667 manager.go:319] Starting recovery of all containers
Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.210724 4667 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused
Jan 31 03:47:57 crc kubenswrapper[4667]: E0131 03:47:57.210881 4667 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.111:6443: connect: connection refused" logger="UnhandledError"
Jan 31 03:47:57 crc kubenswrapper[4667]: E0131 03:47:57.211319 4667 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.111:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fb42b6dd799bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 03:47:57.19933382 +0000 UTC m=+0.715669159,LastTimestamp:2026-01-31 03:47:57.19933382 +0000 UTC m=+0.715669159,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.232817 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.232938 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.232964 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.232990 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.234634 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.234692 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.234720 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.234742 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.234766 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.234790 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.234812 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.234833 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.234879 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.234932 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.234952 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.234973 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235026 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235047 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235069 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235088 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235107 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235125 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235145 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235168 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235217 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235240 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235266 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235290 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235314 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235335 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235356 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235374 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235393 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235443 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235462 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235480 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235503 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235522 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235541 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235562 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235582 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235603 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235622 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235645 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235664 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235693 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235714 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235735 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235756 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235778 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235800 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235821 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235879 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235905 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235926 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235947 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235967 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.235987 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236041 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236062 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236085 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236105 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236126 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236147 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236166 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236184 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236200 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236222 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236242 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236261 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236279 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236299 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236316 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236337 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236358 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236376 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236394 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236414 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236433 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236450 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236468 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236486 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236506 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236524 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236541 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236559 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236577 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236595 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236613 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236633 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236652 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236671 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236688 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236706 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236723 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236743 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236762 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236780 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236797 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236816 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236834 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236881 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236901 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236919 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236946 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236973 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.236995 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.237018 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241025 4667 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241098 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241132 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241155 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241176 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241194 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241210 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241227 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241243 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241259 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241277 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241293 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241312 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241332 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241351 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241367 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241384 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241403 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241426 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241441 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241475 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241490 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241505 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241521 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241541 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241558 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241573 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241591 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241611 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241631 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08"
volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241651 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241670 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241690 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241708 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241726 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241743 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241758 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241774 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241789 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241803 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241819 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241835 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241912 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241928 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241946 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241970 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.241989 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242006 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242022 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242040 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242056 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242073 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242089 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242106 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242122 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242139 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242157 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242175 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242195 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242214 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242284 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242302 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242317 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242333 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242348 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242364 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242379 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242396 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242411 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242426 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242444 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242467 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242485 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242500 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242514 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242529 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242572 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242589 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242605 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242652 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242666 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242681 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242695 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242707 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242721 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242735 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242748 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242792 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242883 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242897 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242912 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242927 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242942 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242958 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242971 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.242985 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.243054 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.243072 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.243086 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.243098 4667 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.243112 4667 reconstruct.go:97] "Volume reconstruction finished" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.243122 4667 reconciler.go:26] "Reconciler: start to sync state" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.258206 4667 manager.go:324] Recovery completed Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.276688 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.277680 4667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.278774 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.278835 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.278896 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.279905 4667 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.280031 4667 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.279948 4667 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.280072 4667 state_mem.go:36] "Initialized new in-memory state store" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.280121 4667 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.280444 4667 kubelet.go:2335] "Starting kubelet main sync loop" Jan 31 03:47:57 crc kubenswrapper[4667]: E0131 03:47:57.280749 4667 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.281201 4667 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused Jan 31 03:47:57 crc kubenswrapper[4667]: E0131 03:47:57.281275 4667 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.111:6443: connect: connection refused" logger="UnhandledError" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.300329 4667 policy_none.go:49] "None policy: Start" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.302901 4667 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.302970 4667 state_mem.go:35] "Initializing new in-memory state store" Jan 31 03:47:57 crc kubenswrapper[4667]: E0131 03:47:57.307454 4667 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.363419 4667 manager.go:334] "Starting Device Plugin manager" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.363474 4667 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.363488 4667 server.go:79] "Starting device plugin registration server" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.364342 4667 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.364360 4667 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.364590 4667 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.364669 4667 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.364678 4667 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.381609 4667 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.381771 4667 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.382986 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.383018 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.383027 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.383184 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.383409 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.383464 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.387323 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.387399 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.387476 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.387494 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.387427 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.387618 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.387724 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.387915 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.387983 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.388982 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.389015 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.389030 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.389035 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.389080 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.389053 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.389310 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.389418 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.389472 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.389942 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.389970 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.389983 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.390184 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.390621 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.390655 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.390672 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.390703 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.390728 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.391184 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.391229 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.391248 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.391441 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.391472 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.391484 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.391507 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.391513 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.393194 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.393297 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.393386 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4667]: E0131 03:47:57.407771 4667 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="400ms" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.445723 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.445930 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.445959 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.445984 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.446169 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.446197 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.446249 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.446280 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.446302 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.446329 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.446351 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.446421 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.446466 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.446490 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.446506 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.464611 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.466031 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.466247 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.466383 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.466552 4667 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 03:47:57 crc kubenswrapper[4667]: E0131 03:47:57.467546 4667 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.111:6443: connect: connection refused" node="crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.548136 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.548610 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.548640 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.548656 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.548678 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.548699 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.548719 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.548739 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.548790 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.548810 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.548830 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.548867 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.548886 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.548902 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.548917 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.548358 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.549342 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.549372 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.549393 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.549413 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.549432 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.549473 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.549510 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.549539 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.549564 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.549590 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.549618 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.549643 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.549668 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.549691 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.668372 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.669733 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.669765 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.669773 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.669794 4667 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 03:47:57 crc kubenswrapper[4667]: E0131 03:47:57.670203 4667 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.111:6443: connect: connection refused" node="crc" Jan 31 03:47:57 crc kubenswrapper[4667]: E0131 03:47:57.705630 4667 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.714340 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.728895 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.747776 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.763057 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: I0131 03:47:57.769474 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.774466 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-a997ec11463fbd85dd2842e091d5015a1c4b321d709ab1672837158ea63f9e5d WatchSource:0}: Error finding container a997ec11463fbd85dd2842e091d5015a1c4b321d709ab1672837158ea63f9e5d: Status 404 returned error can't find the container with id a997ec11463fbd85dd2842e091d5015a1c4b321d709ab1672837158ea63f9e5d Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.775973 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-fc4ff9f54b6031703b4dc0eb79a934703d8f3a9ae84497606fd6f08c03911db5 WatchSource:0}: Error finding container fc4ff9f54b6031703b4dc0eb79a934703d8f3a9ae84497606fd6f08c03911db5: Status 404 returned error can't find the container with id fc4ff9f54b6031703b4dc0eb79a934703d8f3a9ae84497606fd6f08c03911db5 Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.786464 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-6f16bcd21c1df67ca53538ea0176dd85db52c892fbead19d4c6994f9f9930151 WatchSource:0}: Error finding container 6f16bcd21c1df67ca53538ea0176dd85db52c892fbead19d4c6994f9f9930151: Status 404 returned error can't find the container with id 6f16bcd21c1df67ca53538ea0176dd85db52c892fbead19d4c6994f9f9930151 Jan 31 03:47:57 crc kubenswrapper[4667]: W0131 03:47:57.801580 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-3c06a64d043f876fbfb5a3cf6e80583e3d4c512e1edfe972558cd5116ba0d011 WatchSource:0}: Error finding container 3c06a64d043f876fbfb5a3cf6e80583e3d4c512e1edfe972558cd5116ba0d011: Status 404 returned error can't find the container with id 3c06a64d043f876fbfb5a3cf6e80583e3d4c512e1edfe972558cd5116ba0d011 Jan 31 03:47:57 crc kubenswrapper[4667]: E0131 03:47:57.808715 4667 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="800ms" Jan 31 03:47:58 crc kubenswrapper[4667]: I0131 03:47:58.071048 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:58 crc kubenswrapper[4667]: I0131 03:47:58.072390 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:58 crc kubenswrapper[4667]: I0131 03:47:58.072428 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:58 crc kubenswrapper[4667]: I0131 03:47:58.072439 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:58 crc kubenswrapper[4667]: I0131 03:47:58.072463 4667 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 03:47:58 crc kubenswrapper[4667]: E0131 03:47:58.072871 4667 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.111:6443: connect: connection refused" node="crc" Jan 31 03:47:58 crc kubenswrapper[4667]: I0131 03:47:58.201849 4667 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused Jan 31 03:47:58 crc kubenswrapper[4667]: I0131 03:47:58.205996 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 10:42:18.403261027 +0000 UTC Jan 31 03:47:58 crc kubenswrapper[4667]: I0131 03:47:58.285260 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6f16bcd21c1df67ca53538ea0176dd85db52c892fbead19d4c6994f9f9930151"} Jan 31 03:47:58 crc kubenswrapper[4667]: I0131 03:47:58.286560 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a997ec11463fbd85dd2842e091d5015a1c4b321d709ab1672837158ea63f9e5d"} Jan 31 03:47:58 crc kubenswrapper[4667]: I0131 03:47:58.288564 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fc4ff9f54b6031703b4dc0eb79a934703d8f3a9ae84497606fd6f08c03911db5"} Jan 31 03:47:58 crc kubenswrapper[4667]: I0131 03:47:58.289813 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4cd2f1f1c40e35d4b36f5a85ef31d285052deb80dfc9ebca2b5ed0e40a9cac89"} Jan 31 03:47:58 crc kubenswrapper[4667]: I0131 03:47:58.290923 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3c06a64d043f876fbfb5a3cf6e80583e3d4c512e1edfe972558cd5116ba0d011"} Jan 31 03:47:58 crc kubenswrapper[4667]: W0131 03:47:58.314769 4667 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused Jan 31 03:47:58 crc kubenswrapper[4667]: E0131 03:47:58.314893 4667 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.111:6443: connect: connection refused" logger="UnhandledError" Jan 31 03:47:58 crc kubenswrapper[4667]: W0131 03:47:58.493662 4667 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused Jan 31 03:47:58 crc kubenswrapper[4667]: E0131 03:47:58.493775 4667 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.111:6443: connect: connection refused" logger="UnhandledError" Jan 31 03:47:58 crc kubenswrapper[4667]: E0131 03:47:58.609695 4667 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="1.6s" Jan 31 03:47:58 crc kubenswrapper[4667]: W0131 03:47:58.641781 4667 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused Jan 31 03:47:58 crc kubenswrapper[4667]: E0131 03:47:58.641903 4667 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.111:6443: connect: connection refused" logger="UnhandledError" Jan 31 03:47:58 crc kubenswrapper[4667]: W0131 03:47:58.730698 4667 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused Jan 31 03:47:58 crc kubenswrapper[4667]: E0131 03:47:58.730868 4667 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.111:6443: connect: connection refused" logger="UnhandledError" Jan 31 03:47:58 crc kubenswrapper[4667]: I0131 03:47:58.873325 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:58 crc kubenswrapper[4667]: I0131 03:47:58.874777 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:58 crc kubenswrapper[4667]: I0131 
03:47:58.874816 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:58 crc kubenswrapper[4667]: I0131 03:47:58.874827 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:58 crc kubenswrapper[4667]: I0131 03:47:58.874866 4667 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 03:47:58 crc kubenswrapper[4667]: E0131 03:47:58.875291 4667 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.111:6443: connect: connection refused" node="crc" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.202068 4667 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.206167 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 09:07:31.404864367 +0000 UTC Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.239578 4667 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 03:47:59 crc kubenswrapper[4667]: E0131 03:47:59.240608 4667 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.111:6443: connect: connection refused" logger="UnhandledError" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.295328 4667 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be" exitCode=0 Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.295417 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be"} Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.295553 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.297129 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.297212 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.297225 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.301269 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.301267 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5"} Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.301335 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e"} Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.301397 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429"} Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.301422 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa"} Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.302052 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.302120 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.302132 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.304914 4667 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c" exitCode=0 Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.304986 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c"} Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.305094 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.306210 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.306238 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.306249 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.308365 4667 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3" exitCode=0 Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.308417 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3"} Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.308490 4667 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.309636 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.309664 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.309674 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.311271 4667 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="74f97be14eb7d701db876925386940db52004c3cd69931268f857f10ce702c39" exitCode=0 Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.311347 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"74f97be14eb7d701db876925386940db52004c3cd69931268f857f10ce702c39"} Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.311522 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.314271 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.314300 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.314312 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.314590 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.316562 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.316600 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:47:59 crc kubenswrapper[4667]: I0131 03:47:59.316614 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:47:59 crc kubenswrapper[4667]: W0131 03:47:59.959013 4667 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused Jan 31 03:47:59 crc kubenswrapper[4667]: E0131 03:47:59.959136 4667 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.111:6443: connect: connection refused" logger="UnhandledError" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.201905 4667 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 
38.102.83.111:6443: connect: connection refused Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.206534 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 21:17:22.922349572 +0000 UTC Jan 31 03:48:00 crc kubenswrapper[4667]: E0131 03:48:00.212551 4667 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="3.2s" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.316742 4667 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe" exitCode=0 Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.316815 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe"} Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.316882 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.317648 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.317675 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.317684 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.319202 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"35b210ad25dbcd4bf7b51c2f927b5ca85daf9baccfc9d52bbc588be0116b0f79"} Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.319274 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.320415 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.320442 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.320453 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.326533 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"eec15f3fe2b9b1c6827bc9093c19c1fe8cba5dc2aa0db3289e0a0b7029b8b09c"} Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.326579 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c8a145cfd5492e6e2c3168e54747f3699b5148950bf88dc0431699e0dc6ff4fd"} Jan 31 03:48:00 crc kubenswrapper[4667]: 
I0131 03:48:00.326590 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5f2362cecfaa0886df1bf67ce2fe0bc1f9586a785228c776daa0062302ae5f82"} Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.326626 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.327593 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.327620 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.327638 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.331329 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e"} Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.331372 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223"} Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.331384 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e"} Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.331387 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.331393 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec"} Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.332478 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.332518 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.332531 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:00 crc kubenswrapper[4667]: W0131 03:48:00.342594 4667 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.111:6443: connect: connection refused Jan 31 03:48:00 crc kubenswrapper[4667]: E0131 03:48:00.342684 4667 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
38.102.83.111:6443: connect: connection refused" logger="UnhandledError" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.476431 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.477953 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.477995 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.478007 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:00 crc kubenswrapper[4667]: I0131 03:48:00.478031 4667 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 03:48:00 crc kubenswrapper[4667]: E0131 03:48:00.478451 4667 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.111:6443: connect: connection refused" node="crc" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.206641 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 17:54:05.121846253 +0000 UTC Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.336896 4667 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3" exitCode=0 Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.336960 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3"} Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.337101 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.338319 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.338345 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.338357 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.341549 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051"} Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.341573 4667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.341615 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.341684 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.341686 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.342492 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.342520 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.342532 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.343244 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.343269 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.343279 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.343244 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.343321 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.343337 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.809792 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.875427 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.875619 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.876703 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.876759 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:01 crc kubenswrapper[4667]: I0131 03:48:01.876771 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:02 crc kubenswrapper[4667]: I0131 03:48:02.207371 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 01:39:20.839891148 +0000 UTC Jan 31 03:48:02 crc kubenswrapper[4667]: I0131 03:48:02.349005 4667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 03:48:02 crc kubenswrapper[4667]: I0131 03:48:02.349006 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646"} Jan 31 03:48:02 crc kubenswrapper[4667]: I0131 03:48:02.349061 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:02 crc kubenswrapper[4667]: I0131 
03:48:02.349064 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d"} Jan 31 03:48:02 crc kubenswrapper[4667]: I0131 03:48:02.349087 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f"} Jan 31 03:48:02 crc kubenswrapper[4667]: I0131 03:48:02.349105 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660"} Jan 31 03:48:02 crc kubenswrapper[4667]: I0131 03:48:02.349120 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627"} Jan 31 03:48:02 crc kubenswrapper[4667]: I0131 03:48:02.349199 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:02 crc kubenswrapper[4667]: I0131 03:48:02.350118 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:02 crc kubenswrapper[4667]: I0131 03:48:02.350160 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:02 crc kubenswrapper[4667]: I0131 03:48:02.350176 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:02 crc kubenswrapper[4667]: I0131 03:48:02.351378 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:02 crc kubenswrapper[4667]: I0131 03:48:02.351416 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:02 crc kubenswrapper[4667]: I0131 03:48:02.351426 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:02 crc kubenswrapper[4667]: I0131 03:48:02.380305 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:48:02 crc kubenswrapper[4667]: I0131 03:48:02.380480 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:02 crc kubenswrapper[4667]: I0131 03:48:02.382523 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:02 crc kubenswrapper[4667]: I0131 03:48:02.382570 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:02 crc kubenswrapper[4667]: I0131 03:48:02.382578 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.208022 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 04:17:38.786024257 +0000 UTC Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.351660 4667 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.351734 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.351664 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.352768 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.352796 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.352807 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.353005 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.353064 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.353083 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.422177 4667 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.547277 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.547520 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.549230 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.549285 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.549308 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.558123 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.678572 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.680651 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.680727 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.680746 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:03 crc kubenswrapper[4667]: I0131 03:48:03.680792 4667 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 03:48:03 crc kubenswrapper[4667]: 
I0131 03:48:03.736603 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:48:04 crc kubenswrapper[4667]: I0131 03:48:04.209009 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 01:04:35.87040258 +0000 UTC Jan 31 03:48:04 crc kubenswrapper[4667]: I0131 03:48:04.354495 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:04 crc kubenswrapper[4667]: I0131 03:48:04.354541 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:48:04 crc kubenswrapper[4667]: I0131 03:48:04.354495 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:04 crc kubenswrapper[4667]: I0131 03:48:04.356130 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:04 crc kubenswrapper[4667]: I0131 03:48:04.356185 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:04 crc kubenswrapper[4667]: I0131 03:48:04.356203 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:04 crc kubenswrapper[4667]: I0131 03:48:04.356141 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:04 crc kubenswrapper[4667]: I0131 03:48:04.356311 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:04 crc kubenswrapper[4667]: I0131 03:48:04.356365 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:05 crc kubenswrapper[4667]: I0131 03:48:05.210144 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 02:51:51.585498077 +0000 UTC Jan 31 03:48:05 crc kubenswrapper[4667]: I0131 03:48:05.357567 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:05 crc kubenswrapper[4667]: I0131 03:48:05.358820 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:05 crc kubenswrapper[4667]: I0131 03:48:05.358898 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:05 crc kubenswrapper[4667]: I0131 03:48:05.358923 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:05 crc kubenswrapper[4667]: I0131 03:48:05.380689 4667 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 03:48:05 crc kubenswrapper[4667]: I0131 03:48:05.380767 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" 
output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 03:48:05 crc kubenswrapper[4667]: I0131 03:48:05.515509 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 03:48:05 crc kubenswrapper[4667]: I0131 03:48:05.515790 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:05 crc kubenswrapper[4667]: I0131 03:48:05.517648 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:05 crc kubenswrapper[4667]: I0131 03:48:05.517691 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:05 crc kubenswrapper[4667]: I0131 03:48:05.517704 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:05 crc kubenswrapper[4667]: I0131 03:48:05.870537 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 31 03:48:05 crc kubenswrapper[4667]: I0131 03:48:05.870792 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:05 crc kubenswrapper[4667]: I0131 03:48:05.872567 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:05 crc kubenswrapper[4667]: I0131 03:48:05.872616 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:05 crc kubenswrapper[4667]: I0131 03:48:05.872635 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:06 crc kubenswrapper[4667]: I0131 03:48:06.190986 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:48:06 crc kubenswrapper[4667]: I0131 03:48:06.191229 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:06 crc kubenswrapper[4667]: I0131 03:48:06.192738 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:06 crc kubenswrapper[4667]: I0131 03:48:06.192795 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:06 crc kubenswrapper[4667]: I0131 03:48:06.192814 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:06 crc kubenswrapper[4667]: I0131 03:48:06.210971 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:34:37.720434996 +0000 UTC Jan 31 03:48:07 crc kubenswrapper[4667]: I0131 03:48:07.211939 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 03:09:13.088656281 +0000 UTC Jan 31 03:48:07 crc kubenswrapper[4667]: I0131 03:48:07.260316 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 31 03:48:07 crc kubenswrapper[4667]: I0131 03:48:07.260552 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:07 crc 
kubenswrapper[4667]: I0131 03:48:07.261873 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:07 crc kubenswrapper[4667]: I0131 03:48:07.261911 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:07 crc kubenswrapper[4667]: I0131 03:48:07.261922 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:07 crc kubenswrapper[4667]: E0131 03:48:07.706053 4667 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 03:48:08 crc kubenswrapper[4667]: I0131 03:48:08.212604 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 13:12:20.179520477 +0000 UTC Jan 31 03:48:08 crc kubenswrapper[4667]: I0131 03:48:08.938928 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:48:08 crc kubenswrapper[4667]: I0131 03:48:08.939170 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:08 crc kubenswrapper[4667]: I0131 03:48:08.940774 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:08 crc kubenswrapper[4667]: I0131 03:48:08.941035 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:08 crc kubenswrapper[4667]: I0131 03:48:08.941169 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:09 crc kubenswrapper[4667]: I0131 03:48:09.213711 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 02:43:39.461329596 +0000 UTC Jan 31 03:48:10 crc kubenswrapper[4667]: I0131 03:48:10.215708 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 15:27:36.206626126 +0000 UTC Jan 31 03:48:10 crc kubenswrapper[4667]: I0131 03:48:10.763473 4667 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 31 03:48:10 crc kubenswrapper[4667]: I0131 03:48:10.763553 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 31 03:48:10 crc kubenswrapper[4667]: I0131 03:48:10.772308 4667 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} 
Jan 31 03:48:10 crc kubenswrapper[4667]: I0131 03:48:10.772517 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 31 03:48:11 crc kubenswrapper[4667]: I0131 03:48:11.216733 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 08:46:27.631322646 +0000 UTC
Jan 31 03:48:11 crc kubenswrapper[4667]: I0131 03:48:11.294931 4667 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 31 03:48:11 crc kubenswrapper[4667]: I0131 03:48:11.295034 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 31 03:48:11 crc kubenswrapper[4667]: I0131 03:48:11.838271 4667 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]log ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]etcd ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/openshift.io-api-request-count-filter ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/openshift.io-startkubeinformers ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/priority-and-fairness-config-consumer ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/priority-and-fairness-filter ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/start-apiextensions-informers ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/start-apiextensions-controllers ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/crd-informer-synced ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/start-system-namespaces-controller ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/start-cluster-authentication-info-controller ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/start-legacy-token-tracking-controller ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/start-service-ip-repair-controllers ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Jan 31 03:48:11 crc kubenswrapper[4667]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/bootstrap-controller ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/apiservice-status-local-available-controller ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/apiservice-status-remote-available-controller ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/apiservice-registration-controller ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/apiservice-discovery-controller ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]autoregister-completion ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/apiservice-openapi-controller ok
Jan 31 03:48:11 crc kubenswrapper[4667]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 31 03:48:11 crc kubenswrapper[4667]: livez check failed
Jan 31 03:48:11 crc kubenswrapper[4667]: I0131 03:48:11.838353 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 03:48:12 crc kubenswrapper[4667]: I0131 03:48:12.217186 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:13:18.606785248 +0000 UTC
Jan 31 03:48:13 crc kubenswrapper[4667]: I0131 03:48:13.217872 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 16:19:28.789661458 +0000 UTC
Jan 31 03:48:13 crc kubenswrapper[4667]: I0131 03:48:13.737526 4667 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 31 03:48:13 crc kubenswrapper[4667]: I0131 03:48:13.737612 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 31 03:48:14 crc kubenswrapper[4667]: I0131 03:48:14.218664 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 02:26:53.563334852 +0000 UTC
Jan 31 03:48:15 crc kubenswrapper[4667]: I0131 03:48:15.219818 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 13:38:58.742107882 +0000 UTC
Jan 31 03:48:15 crc kubenswrapper[4667]: I0131 03:48:15.382212 4667 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 03:48:15 crc kubenswrapper[4667]: I0131 03:48:15.382318 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 03:48:15 crc kubenswrapper[4667]: I0131 03:48:15.751888 4667 trace.go:236] Trace[1777373020]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 03:48:05.466) (total time: 10285ms):
Jan 31 03:48:15 crc kubenswrapper[4667]: Trace[1777373020]: ---"Objects listed" error: 10285ms (03:48:15.751)
Jan 31 03:48:15 crc kubenswrapper[4667]: Trace[1777373020]: [10.285427381s] [10.285427381s] END
Jan 31 03:48:15 crc kubenswrapper[4667]: I0131 03:48:15.751925 4667 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 31 03:48:15 crc kubenswrapper[4667]: I0131 03:48:15.752832 4667 trace.go:236] Trace[67865317]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 03:48:00.793) (total time: 14959ms):
Jan 31 03:48:15 crc kubenswrapper[4667]: Trace[67865317]: ---"Objects listed" error: 14958ms (03:48:15.752)
Jan 31 03:48:15 crc kubenswrapper[4667]: Trace[67865317]: [14.959014613s] [14.959014613s] END
Jan 31 03:48:15 crc kubenswrapper[4667]: I0131 03:48:15.752870 4667 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 31 03:48:15 crc kubenswrapper[4667]: I0131 03:48:15.754843 4667 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 31 03:48:15 crc kubenswrapper[4667]: E0131 03:48:15.759368 4667 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 31 03:48:15 crc kubenswrapper[4667]: E0131 03:48:15.783385 4667 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 31 03:48:15 crc kubenswrapper[4667]: I0131 03:48:15.784969 4667 trace.go:236] Trace[946747010]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 03:48:00.966) (total time: 14818ms):
Jan 31 03:48:15 crc kubenswrapper[4667]: Trace[946747010]: ---"Objects listed" error: 14818ms (03:48:15.784)
Jan 31 03:48:15 crc kubenswrapper[4667]: Trace[946747010]: [14.818609989s] [14.818609989s] END
Jan 31 03:48:15 crc kubenswrapper[4667]: I0131 03:48:15.785000 4667 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 31 03:48:15 crc kubenswrapper[4667]: I0131 03:48:15.797602 4667 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 31 03:48:15 crc kubenswrapper[4667]: I0131 03:48:15.804447 4667 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 31 03:48:15 crc kubenswrapper[4667]: I0131 03:48:15.904245 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 31 03:48:15 crc kubenswrapper[4667]: I0131 03:48:15.917566 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.197564 4667 apiserver.go:52] "Watching apiserver"
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.203151 4667 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.203533 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-etcd/etcd-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"]
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.203972 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.204153 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.203996 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.204375 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.204195 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.204197 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.204255 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.204808 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.203970 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.205362 4667 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.205740 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.205777 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.205796 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.205817 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.205839 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.205878 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.205896 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
"operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.205930 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.205948 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.205964 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.205979 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206024 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206040 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206056 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206073 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206089 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206109 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206126 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206144 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206165 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206186 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206204 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206220 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206235 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206253 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206270 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206286 4667 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206303 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206320 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206337 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206354 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206373 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206392 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206412 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206429 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206449 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 03:48:16 crc 
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206468 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206485 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206501 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206520 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206538 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206556 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206573 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206603 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206624 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206644 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206662 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206679 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206702 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206722 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206743 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206762 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206780 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206799 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206838 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206883 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206910 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206934 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206955 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206973 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.206995 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207022 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207041 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207059 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207083 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
\"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207129 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207150 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207172 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207199 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207222 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207247 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207294 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207318 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207341 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207365 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207386 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207407 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207428 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207451 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207473 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207496 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207520 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207543 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207564 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207587 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod 
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207610 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207644 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207666 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207689 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207715 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207962 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.209940 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210024 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210051 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210103 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210141 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210166 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210190 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210215 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210238 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210261 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210285 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210312 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210334 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210357 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210379 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207340 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210405 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210429 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210456 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210481 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210506 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210533 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210558 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210572 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: 
"console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210580 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207644 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210781 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.211355 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.212297 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.212556 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.212705 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.213160 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.213319 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.213594 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.213631 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.214081 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.214180 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.214522 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.214851 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.215240 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.215275 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.208300 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.208448 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.208501 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.208655 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.208693 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.208993 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.209203 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.209408 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.209481 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.216566 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.216804 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.216875 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.216956 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.217021 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.217197 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.209487 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.209562 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.209614 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.217678 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.209711 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.217991 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.209770 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.218116 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.209785 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210188 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210389 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.216144 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.207564 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.216219 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210003 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.218225 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.218285 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.218404 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.218477 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.218734 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.218992 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.219499 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.220549 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.220615 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 13:54:42.952652451 +0000 UTC Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.220960 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.221072 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.221108 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.221127 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.221172 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.221228 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.221352 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.221467 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.221517 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.222046 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.222171 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.222313 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.223004 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.223044 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.223098 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.223381 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.223923 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.224288 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.224492 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.224649 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.225946 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.226060 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.226542 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.226790 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.226986 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.227170 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.227524 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.227395 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.227954 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.228563 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.228584 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.228938 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.229048 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.229314 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). 
InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.229347 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.229451 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.228960 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.208119 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.210583 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230193 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230227 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230250 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230364 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230399 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230431 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230459 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230512 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230532 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230550 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230570 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230588 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230606 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230628 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230645 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230662 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230680 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230697 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230714 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230734 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230771 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230889 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230902 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230921 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230950 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230974 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.230998 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231021 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 
03:48:16.231043 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231066 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231090 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231112 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231136 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231158 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231182 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231205 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231275 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231299 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " 
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231321 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231344 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231366 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231388 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231410 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231433 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231456 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231477 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231508 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231529 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231550 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231573 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231595 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231595 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231618 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231646 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231671 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231692 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231714 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231737 4667 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231762 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231783 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231808 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231840 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231850 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231884 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231914 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231941 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231966 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231992 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.231994 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.232015 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.232044 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.232074 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.232100 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.232127 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.232157 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.232183 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.232208 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.232232 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.232257 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") 
pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.232282 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.232308 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.232333 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.232359 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.232419 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.232427 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.232958 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.232995 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233022 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233044 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233064 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233103 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233260 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233145 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233361 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233405 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233433 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233463 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233491 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233521 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233555 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233584 4667 
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233748 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233767 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233783 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233795 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233810 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233822 4667 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233838 4667 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233850 4667 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233903 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233917 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233930 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233942 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233954 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233967 4667 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233980 4667 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.233992 4667 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234005 4667 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234017 4667 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234031 4667 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234044 4667 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234057 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234070 4667 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234082 4667 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234094 4667 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234107 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234118 4667 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234130 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234141 4667 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234156 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234168 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234179 4667 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234191 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234202 4667 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234215 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234227 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234238 4667 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234252 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234265 4667 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234275 4667 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234287 4667 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234300 4667 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234313 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234326 4667 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234338 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234350 4667 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234363 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234375 4667 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234387 4667 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234400 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234413 4667 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234425 4667 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234438 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234450 4667 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234463 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234476 4667 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234489 4667 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234501 4667 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234522 4667 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234537 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234568 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234581 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234597 4667 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234613 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234630 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node 
\"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234644 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234750 4667 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234848 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234928 4667 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234949 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234962 4667 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234975 4667 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234990 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235007 4667 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235025 4667 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235040 4667 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235055 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235071 4667 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc 
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235099 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235117 4667 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235138 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235153 4667 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235169 4667 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235183 4667 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235196 4667 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235209 4667 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235227 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235237 4667 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235247 4667 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235259 4667 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235269 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235279 4667 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235291 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235301 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234048 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234186 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.234462 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235125 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235386 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
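
[Note] Two reading notes for the stretch above. A reconciler_common.go:293 "Volume detached ... DevicePath \"\"" line means the volume has been removed from the kubelet's actual state of the world; the empty DevicePath is expected for configmap, secret, projected, and empty-dir volumes, which have no block device to release. Also, the I0131 timestamps are not strictly monotonic (for example, 03:48:16.234048 appears after 03:48:16.235301): volume operations run on concurrent goroutines, so entries reach the log in completion order rather than start order.
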
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.235701 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.235918 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:48:16.735515068 +0000 UTC m=+20.251850377 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.236531 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.237049 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.237274 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.237633 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.237992 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.238011 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.238277 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.238588 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.238901 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.239136 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.239159 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.239334 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.239399 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.239472 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.239511 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.239682 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.239744 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.240085 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.240092 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.240137 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.240459 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.240496 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.240794 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.241082 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.241087 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.241152 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.241380 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.241535 4667 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.242201 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.242452 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.242474 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.242689 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.246237 4667 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.246352 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:16.746324606 +0000 UTC m=+20.262659915 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.246405 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.246645 4667 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.246760 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:16.746745987 +0000 UTC m=+20.263081296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.247307 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.247471 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.251621 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.251772 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.251920 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.252061 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.252261 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.252400 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.252394 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.252558 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.252802 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.252911 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.253076 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.253175 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.253171 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.253513 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.253540 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.253756 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.254662 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.254717 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.255167 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.255414 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.257433 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.255830 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.255912 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.256305 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.257575 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.257975 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.258040 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.258062 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.258302 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.258448 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.258872 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.259483 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.259799 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.256322 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.256331 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.256531 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.256675 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.256764 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.256870 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.257009 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.257047 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.257203 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.260180 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.260202 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.260215 4667 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.260285 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:16.760266358 +0000 UTC m=+20.276601657 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.260402 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.260454 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.260536 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.261064 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.261576 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.263340 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.265100 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.267455 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.275530 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.275792 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.275914 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.276590 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.277540 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.280561 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.282165 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.282238 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.282315 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.289601 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.289615 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.289737 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.289760 4667 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.289840 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:16.789810596 +0000 UTC m=+20.306146125 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.291579 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.291692 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.292029 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.292884 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.293272 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.295524 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.307089 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.308746 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.320494 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.340677 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341083 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341129 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341180 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341191 4667 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341204 4667 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341213 4667 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341222 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341232 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341241 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341250 4667 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341260 4667 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341270 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341279 4667 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341287 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341296 4667 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341304 4667 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341313 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341322 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341331 4667 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341341 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341351 4667 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341359 4667 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341369 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341377 4667 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341385 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341394 4667 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341404 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341413 4667 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341421 4667 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341430 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341438 4667 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341445 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341454 4667 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341462 4667 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341470 4667 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341479 4667 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341487 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341496 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341506 4667 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341515 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341525 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341548 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341557 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341566 4667 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341575 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341584 4667 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341593 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341601 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341610 4667 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341619 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341628 4667 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341637 4667 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341646 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341656 4667 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341666 4667 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341675 4667 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341684 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341695 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341709 4667 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341718 4667 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341727 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341741 4667 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341749 4667 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341757 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341765 4667 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341773 4667 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341781 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341790 4667 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341799 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341808 4667 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341818 4667 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341826 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341838 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341847 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341911 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341920 4667 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341929 4667 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341943 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341951 4667 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341959 4667 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341968 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341976 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341985 4667 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.341993 4667 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.342001 4667 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.342011 4667 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.342024 4667 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.342032 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.342039 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.342048 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.342057 4667 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.342065 4667 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.342073 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.342083 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.342091 4667 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.342099 4667 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.342107 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.342114 4667 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.342123 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.342130 4667 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.342138 4667 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" 
DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.343279 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.343915 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.344333 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.380967 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.386775 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.396503 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.397791 4667 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051" exitCode=255 Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.397958 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051"} Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.405397 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd
48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.409878 4667 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.416695 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.418989 4667 scope.go:117] "RemoveContainer" containerID="8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.421068 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.429921 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.440133 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.442608 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.451597 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.462782 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.474104 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.484759 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.508737 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri
-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.524911 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ku
be-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.536281 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.537245 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 03:48:16 crc kubenswrapper[4667]: W0131 03:48:16.548964 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-8107aa557a168e2c8eacdf892fc8751094758165d683f585c575254983a17dc5 WatchSource:0}: Error finding container 8107aa557a168e2c8eacdf892fc8751094758165d683f585c575254983a17dc5: Status 404 returned error can't find the container with id 8107aa557a168e2c8eacdf892fc8751094758165d683f585c575254983a17dc5 Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.551245 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.603088 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 03:48:16 crc kubenswrapper[4667]: W0131 03:48:16.611053 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-5c9639bcdb046491afeec213d74af5e2041765ef0bc6e380ef1451e14c8e5f8c WatchSource:0}: Error finding container 5c9639bcdb046491afeec213d74af5e2041765ef0bc6e380ef1451e14c8e5f8c: Status 404 returned error can't find the container with id 5c9639bcdb046491afeec213d74af5e2041765ef0bc6e380ef1451e14c8e5f8c Jan 31 03:48:16 crc kubenswrapper[4667]: W0131 03:48:16.637343 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-4b4f981c4ee786426086e15adb2f38e613ffd7abe76c987572abe40c8c00ba7e WatchSource:0}: Error finding container 4b4f981c4ee786426086e15adb2f38e613ffd7abe76c987572abe40c8c00ba7e: Status 404 returned error can't find the container with id 4b4f981c4ee786426086e15adb2f38e613ffd7abe76c987572abe40c8c00ba7e Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.745571 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.745748 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:48:17.745688565 +0000 UTC m=+21.262023864 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.818450 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.836831 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.846614 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.846664 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.846710 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.846737 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.846895 4667 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.846923 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.846947 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.846960 4667 projected.go:194] Error preparing data for 
projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.846994 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:17.846970026 +0000 UTC m=+21.363305325 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.847013 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:17.847005097 +0000 UTC m=+21.363340396 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.847019 4667 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.847049 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.847065 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.847074 4667 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.847081 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:17.847062449 +0000 UTC m=+21.363397748 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:48:16 crc kubenswrapper[4667]: E0131 03:48:16.847114 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:17.84709494 +0000 UTC m=+21.363430329 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.849580 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.858594 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.885595 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri
-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.896894 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.907202 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.915617 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:16 crc kubenswrapper[4667]: I0131 03:48:16.924955 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.221764 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 07:47:42.143918608 +0000 UTC Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.284424 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.285153 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.286500 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.287220 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.288285 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.288873 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.289544 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.290530 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.291256 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 
03:48:17.292194 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.292740 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.293933 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.294514 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.295129 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.297364 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.297974 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.299128 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.299562 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.300201 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.301429 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.302174 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.303373 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.304148 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.305222 4667 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.305824 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.306866 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.307323 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.309469 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.309946 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.310735 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.311752 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.312382 4667 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.312528 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.314341 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.315020 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.315570 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.317239 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" 
path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.318237 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.319024 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.320573 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.320653 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.321680 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.322291 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.323094 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.323934 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.324584 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.325747 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.326346 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.327347 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.328280 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.329162 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.329624 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.330489 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.330992 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.331557 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.332423 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.334071 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.347342 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.357998 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.372937 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.391673 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.401920 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8107aa557a168e2c8eacdf892fc8751094758165d683f585c575254983a17dc5"} Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.404072 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.405585 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe"} Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.405982 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.406884 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9"} Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.406921 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4b4f981c4ee786426086e15adb2f38e613ffd7abe76c987572abe40c8c00ba7e"} Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.408829 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8"} Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.408891 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945"} Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.408903 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5c9639bcdb046491afeec213d74af5e2041765ef0bc6e380ef1451e14c8e5f8c"} Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.410026 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.410020 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31
T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.420793 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.442916 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.475440 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.495649 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f74
04881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.511870 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.525263 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.538691 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.552120 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.753634 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:48:17 crc kubenswrapper[4667]: E0131 03:48:17.753817 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:48:19.753780233 +0000 UTC m=+23.270115532 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.854556 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.854614 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.854642 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:17 crc kubenswrapper[4667]: I0131 03:48:17.854664 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:17 crc kubenswrapper[4667]: E0131 03:48:17.854796 4667 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:48:17 crc kubenswrapper[4667]: E0131 03:48:17.854806 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:48:17 crc kubenswrapper[4667]: E0131 03:48:17.854815 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:48:17 crc kubenswrapper[4667]: E0131 03:48:17.854899 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:48:17 crc kubenswrapper[4667]: E0131 03:48:17.854913 4667 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:17 crc 
kubenswrapper[4667]: E0131 03:48:17.854925 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:19.85490212 +0000 UTC m=+23.371237429 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:48:17 crc kubenswrapper[4667]: E0131 03:48:17.854832 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:48:17 crc kubenswrapper[4667]: E0131 03:48:17.855019 4667 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:17 crc kubenswrapper[4667]: E0131 03:48:17.854813 4667 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:48:17 crc kubenswrapper[4667]: E0131 03:48:17.855079 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:19.854980412 +0000 UTC m=+23.371315701 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:17 crc kubenswrapper[4667]: E0131 03:48:17.855101 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:19.855092685 +0000 UTC m=+23.371428114 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:17 crc kubenswrapper[4667]: E0131 03:48:17.855113 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-31 03:48:19.855108106 +0000 UTC m=+23.371443405 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:48:18 crc kubenswrapper[4667]: I0131 03:48:18.222786 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 05:27:16.020025052 +0000 UTC Jan 31 03:48:18 crc kubenswrapper[4667]: I0131 03:48:18.281285 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:18 crc kubenswrapper[4667]: I0131 03:48:18.281327 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:18 crc kubenswrapper[4667]: I0131 03:48:18.281365 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:18 crc kubenswrapper[4667]: E0131 03:48:18.281420 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:18 crc kubenswrapper[4667]: E0131 03:48:18.281656 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:18 crc kubenswrapper[4667]: E0131 03:48:18.281887 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:19 crc kubenswrapper[4667]: I0131 03:48:19.223237 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 08:17:13.526022982 +0000 UTC Jan 31 03:48:19 crc kubenswrapper[4667]: I0131 03:48:19.416663 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485"} Jan 31 03:48:19 crc kubenswrapper[4667]: I0131 03:48:19.437300 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:19 crc kubenswrapper[4667]: I0131 03:48:19.457793 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:19 crc kubenswrapper[4667]: I0131 03:48:19.494182 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f801104
1fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f
7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:19 crc kubenswrapper[4667]: I0131 03:48:19.516737 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:19 crc kubenswrapper[4667]: I0131 03:48:19.531465 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:19 crc kubenswrapper[4667]: I0131 03:48:19.548635 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:19 crc kubenswrapper[4667]: I0131 03:48:19.566233 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:19 crc kubenswrapper[4667]: I0131 03:48:19.587224 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:19 crc kubenswrapper[4667]: I0131 03:48:19.773373 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:48:19 crc kubenswrapper[4667]: E0131 03:48:19.773609 4667 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:48:23.773576395 +0000 UTC m=+27.289911694 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:48:19 crc kubenswrapper[4667]: I0131 03:48:19.873867 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:19 crc kubenswrapper[4667]: I0131 03:48:19.873912 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:19 crc kubenswrapper[4667]: I0131 03:48:19.873942 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:19 crc kubenswrapper[4667]: I0131 03:48:19.873981 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:19 crc kubenswrapper[4667]: E0131 03:48:19.874074 4667 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:48:19 crc kubenswrapper[4667]: E0131 03:48:19.874122 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:48:19 crc kubenswrapper[4667]: E0131 03:48:19.874145 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:48:19 crc kubenswrapper[4667]: E0131 03:48:19.874204 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:48:19 crc kubenswrapper[4667]: E0131 03:48:19.874224 4667 projected.go:194] 
Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:19 crc kubenswrapper[4667]: E0131 03:48:19.874155 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:48:19 crc kubenswrapper[4667]: E0131 03:48:19.874288 4667 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:19 crc kubenswrapper[4667]: E0131 03:48:19.874137 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:23.874119966 +0000 UTC m=+27.390455265 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:48:19 crc kubenswrapper[4667]: E0131 03:48:19.874387 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:23.874360423 +0000 UTC m=+27.390695742 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:19 crc kubenswrapper[4667]: E0131 03:48:19.874434 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:23.874420785 +0000 UTC m=+27.390756184 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:19 crc kubenswrapper[4667]: E0131 03:48:19.874455 4667 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:48:19 crc kubenswrapper[4667]: E0131 03:48:19.874481 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:23.874473786 +0000 UTC m=+27.390809085 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:48:20 crc kubenswrapper[4667]: I0131 03:48:20.223566 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:27:17.227202732 +0000 UTC Jan 31 03:48:20 crc kubenswrapper[4667]: I0131 03:48:20.281512 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:20 crc kubenswrapper[4667]: I0131 03:48:20.281562 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:20 crc kubenswrapper[4667]: I0131 03:48:20.281593 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:20 crc kubenswrapper[4667]: E0131 03:48:20.281644 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:20 crc kubenswrapper[4667]: E0131 03:48:20.281963 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:20 crc kubenswrapper[4667]: E0131 03:48:20.281818 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:21 crc kubenswrapper[4667]: I0131 03:48:21.224600 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 07:25:50.288248665 +0000 UTC Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.160032 4667 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.162173 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.162244 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.162268 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.162449 4667 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.174318 4667 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.174737 4667 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.176729 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.176778 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.176788 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.176805 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.176816 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4667]: E0131 03:48:22.203766 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.208623 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.208689 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.208709 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.208743 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.208762 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.224729 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 14:28:50.114145576 +0000 UTC Jan 31 03:48:22 crc kubenswrapper[4667]: E0131 03:48:22.228457 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.235431 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.235485 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.235493 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.235507 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.235517 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4667]: E0131 03:48:22.252096 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.257474 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.257519 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.257532 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.257550 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.257562 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4667]: E0131 03:48:22.272718 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.277380 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.277431 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.277448 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.277472 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.277489 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.281524 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.281552 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.281542 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:22 crc kubenswrapper[4667]: E0131 03:48:22.281698 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:22 crc kubenswrapper[4667]: E0131 03:48:22.281868 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:22 crc kubenswrapper[4667]: E0131 03:48:22.281997 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:22 crc kubenswrapper[4667]: E0131 03:48:22.299371 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: E0131 03:48:22.299525 4667 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.301211 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.301271 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.301292 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.301317 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.301336 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.385012 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.388786 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.394707 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.402961 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.404773 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.404811 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.404863 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.404878 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.404892 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.418165 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.442928 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd
48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.458044 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.477083 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.498869 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.506861 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.506910 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.506923 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.506947 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.506982 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.514812 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.541887 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.555947 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.570981 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.598648 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.608939 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.609175 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.609240 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.609327 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.609391 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.631115 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.671380 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d014
79c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.694795 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.711751 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.711799 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.711810 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.711829 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.711856 4667 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.737243 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.780377 4667 csr.go:261] certificate signing request csr-8k5x8 is approved, waiting to be issued Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.782857 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.798015 4667 csr.go:257] certificate signing request csr-8k5x8 is issued Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.805362 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:22Z is after 
2025-08-24T17:21:41Z" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.814075 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.814114 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.814123 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.814140 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.814150 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.917252 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.917294 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.917305 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.917324 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:22 crc kubenswrapper[4667]: I0131 03:48:22.917347 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:22Z","lastTransitionTime":"2026-01-31T03:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.020824 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.020923 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.020935 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.020956 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.021311 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:23Z","lastTransitionTime":"2026-01-31T03:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.123656 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.123697 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.123707 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.123723 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.123732 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:23Z","lastTransitionTime":"2026-01-31T03:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.225016 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:26:58.110474937 +0000 UTC Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.226309 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.226353 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.226366 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.226380 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.226389 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:23Z","lastTransitionTime":"2026-01-31T03:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.328643 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.328679 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.328689 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.328706 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.328715 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:23Z","lastTransitionTime":"2026-01-31T03:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.430716 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.430758 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.430768 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.430784 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.430799 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:23Z","lastTransitionTime":"2026-01-31T03:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.533385 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.533456 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.533468 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.533489 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.533501 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:23Z","lastTransitionTime":"2026-01-31T03:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.560336 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-j9b7g"]
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.560784 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-ns977"]
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.561022 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.561172 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-cd764"]
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.561046 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-ns977"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.561351 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-zgr94"]
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.561913 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zgr94"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.561910 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-cd764"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.565465 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.565689 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.567654 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.567690 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.567901 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.568002 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.568057 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.568071 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.568268 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.568422 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.569049 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.569442 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.569672 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.569914 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.571804 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.581898 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.597145 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.613682 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.624813 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.635466 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.635508 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.635518 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.635536 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.635547 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:23Z","lastTransitionTime":"2026-01-31T03:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.645260 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.662709 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.675977 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.691596 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.702679 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.708876 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-etc-kubernetes\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709052 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/57dcb541-6b8f-4730-9fd8-7ce27870e3a3-hosts-file\") pod \"node-resolver-ns977\" (UID: \"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\") " pod="openshift-dns/node-resolver-ns977"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709152 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/50870207-38dd-40d0-8a53-0eaa3af9d1fb-cnibin\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709275 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8wnt\" (UniqueName: \"kubernetes.io/projected/b069c8d1-f785-4509-8ee6-7d44525bdc89-kube-api-access-n8wnt\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709313 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-os-release\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709429 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/50870207-38dd-40d0-8a53-0eaa3af9d1fb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709489 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b069c8d1-f785-4509-8ee6-7d44525bdc89-multus-daemon-config\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709512 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-host-run-multus-certs\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709530 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/50870207-38dd-40d0-8a53-0eaa3af9d1fb-cni-binary-copy\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709549 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b103bbd2-fb5d-4b2a-8b01-c32f699757df-proxy-tls\") pod \"machine-config-daemon-j9b7g\" (UID: \"b103bbd2-fb5d-4b2a-8b01-c32f699757df\") " pod="openshift-machine-config-operator/machine-config-daemon-j9b7g"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709589 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-cnibin\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709610 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-host-var-lib-cni-bin\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709631 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-multus-conf-dir\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709666 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfwcb\" (UniqueName: \"kubernetes.io/projected/b103bbd2-fb5d-4b2a-8b01-c32f699757df-kube-api-access-zfwcb\") pod \"machine-config-daemon-j9b7g\" (UID: \"b103bbd2-fb5d-4b2a-8b01-c32f699757df\") " pod="openshift-machine-config-operator/machine-config-daemon-j9b7g"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709710 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-hostroot\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709765 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-multus-cni-dir\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709788 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/50870207-38dd-40d0-8a53-0eaa3af9d1fb-os-release\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709805 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/50870207-38dd-40d0-8a53-0eaa3af9d1fb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709821 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b103bbd2-fb5d-4b2a-8b01-c32f699757df-rootfs\") pod \"machine-config-daemon-j9b7g\" (UID: \"b103bbd2-fb5d-4b2a-8b01-c32f699757df\") " pod="openshift-machine-config-operator/machine-config-daemon-j9b7g"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709852 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-host-var-lib-kubelet\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709876 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-host-var-lib-cni-multus\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709896 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-multus-socket-dir-parent\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709916 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccvwd\" (UniqueName: \"kubernetes.io/projected/57dcb541-6b8f-4730-9fd8-7ce27870e3a3-kube-api-access-ccvwd\") pod \"node-resolver-ns977\" (UID: \"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\") " pod="openshift-dns/node-resolver-ns977"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709953 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/50870207-38dd-40d0-8a53-0eaa3af9d1fb-system-cni-dir\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709980 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-host-run-netns\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.709996 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b069c8d1-f785-4509-8ee6-7d44525bdc89-cni-binary-copy\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.710029 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-host-run-k8s-cni-cncf-io\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.710208 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tszz9\" (UniqueName: \"kubernetes.io/projected/50870207-38dd-40d0-8a53-0eaa3af9d1fb-kube-api-access-tszz9\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.710282 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b103bbd2-fb5d-4b2a-8b01-c32f699757df-mcd-auth-proxy-config\") pod \"machine-config-daemon-j9b7g\" (UID: \"b103bbd2-fb5d-4b2a-8b01-c32f699757df\") " pod="openshift-machine-config-operator/machine-config-daemon-j9b7g"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.710351 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-system-cni-dir\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.715005 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.727191 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.738494 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.738532 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.738542 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.738567 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.738576 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:23Z","lastTransitionTime":"2026-01-31T03:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.739144 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.756100 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd
48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.768506 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.779154 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.791816 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.799212 4667 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-31 03:43:22 +0000 UTC, rotation deadline is 2026-10-16 10:29:32.335397955 +0000 UTC Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.799248 4667 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6198h41m8.536152957s for next certificate rotation Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.802803 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811044 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811144 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-system-cni-dir\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: E0131 03:48:23.811181 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:48:31.811160385 +0000 UTC m=+35.327495704 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811203 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tszz9\" (UniqueName: \"kubernetes.io/projected/50870207-38dd-40d0-8a53-0eaa3af9d1fb-kube-api-access-tszz9\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811230 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b103bbd2-fb5d-4b2a-8b01-c32f699757df-mcd-auth-proxy-config\") pod \"machine-config-daemon-j9b7g\" (UID: \"b103bbd2-fb5d-4b2a-8b01-c32f699757df\") " pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811271 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-etc-kubernetes\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811294 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/57dcb541-6b8f-4730-9fd8-7ce27870e3a3-hosts-file\") pod \"node-resolver-ns977\" (UID: \"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\") " pod="openshift-dns/node-resolver-ns977" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811328 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-system-cni-dir\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811336 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8wnt\" (UniqueName: \"kubernetes.io/projected/b069c8d1-f785-4509-8ee6-7d44525bdc89-kube-api-access-n8wnt\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811362 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/50870207-38dd-40d0-8a53-0eaa3af9d1fb-cnibin\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811387 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-os-release\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: 
I0131 03:48:23.811407 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/50870207-38dd-40d0-8a53-0eaa3af9d1fb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811428 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-etc-kubernetes\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811483 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-os-release\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811520 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/50870207-38dd-40d0-8a53-0eaa3af9d1fb-cnibin\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811520 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/57dcb541-6b8f-4730-9fd8-7ce27870e3a3-hosts-file\") pod \"node-resolver-ns977\" (UID: \"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\") " pod="openshift-dns/node-resolver-ns977" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811548 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-multus-conf-dir\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811660 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-multus-conf-dir\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811674 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b069c8d1-f785-4509-8ee6-7d44525bdc89-multus-daemon-config\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811711 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-host-run-multus-certs\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811743 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/50870207-38dd-40d0-8a53-0eaa3af9d1fb-cni-binary-copy\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811768 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b103bbd2-fb5d-4b2a-8b01-c32f699757df-proxy-tls\") pod \"machine-config-daemon-j9b7g\" (UID: \"b103bbd2-fb5d-4b2a-8b01-c32f699757df\") " pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811745 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-host-run-multus-certs\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811807 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-cnibin\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811833 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-host-var-lib-cni-bin\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811876 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfwcb\" (UniqueName: \"kubernetes.io/projected/b103bbd2-fb5d-4b2a-8b01-c32f699757df-kube-api-access-zfwcb\") pod \"machine-config-daemon-j9b7g\" (UID: \"b103bbd2-fb5d-4b2a-8b01-c32f699757df\") " pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811896 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-cnibin\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811900 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-host-var-lib-cni-bin\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811900 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-multus-cni-dir\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811953 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-hostroot\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " 
pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811980 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-hostroot\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811981 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/50870207-38dd-40d0-8a53-0eaa3af9d1fb-os-release\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.811954 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-multus-cni-dir\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812023 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/50870207-38dd-40d0-8a53-0eaa3af9d1fb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812101 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b103bbd2-fb5d-4b2a-8b01-c32f699757df-rootfs\") pod \"machine-config-daemon-j9b7g\" (UID: \"b103bbd2-fb5d-4b2a-8b01-c32f699757df\") " pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812021 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/50870207-38dd-40d0-8a53-0eaa3af9d1fb-os-release\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812129 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-host-var-lib-cni-multus\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812152 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b103bbd2-fb5d-4b2a-8b01-c32f699757df-mcd-auth-proxy-config\") pod \"machine-config-daemon-j9b7g\" (UID: \"b103bbd2-fb5d-4b2a-8b01-c32f699757df\") " pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812165 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b103bbd2-fb5d-4b2a-8b01-c32f699757df-rootfs\") pod \"machine-config-daemon-j9b7g\" (UID: \"b103bbd2-fb5d-4b2a-8b01-c32f699757df\") " pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 03:48:23 crc 
kubenswrapper[4667]: I0131 03:48:23.812155 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-host-var-lib-kubelet\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812184 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-host-var-lib-kubelet\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812224 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-multus-socket-dir-parent\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812281 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-multus-socket-dir-parent\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812287 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-host-var-lib-cni-multus\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812310 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccvwd\" (UniqueName: \"kubernetes.io/projected/57dcb541-6b8f-4730-9fd8-7ce27870e3a3-kube-api-access-ccvwd\") pod \"node-resolver-ns977\" (UID: \"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\") " pod="openshift-dns/node-resolver-ns977" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812307 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/50870207-38dd-40d0-8a53-0eaa3af9d1fb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812354 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/50870207-38dd-40d0-8a53-0eaa3af9d1fb-system-cni-dir\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812444 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-host-run-netns\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812490 4667 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b069c8d1-f785-4509-8ee6-7d44525bdc89-cni-binary-copy\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812517 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-host-run-k8s-cni-cncf-io\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812552 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/50870207-38dd-40d0-8a53-0eaa3af9d1fb-system-cni-dir\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812580 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/50870207-38dd-40d0-8a53-0eaa3af9d1fb-cni-binary-copy\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812600 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-host-run-k8s-cni-cncf-io\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812611 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b069c8d1-f785-4509-8ee6-7d44525bdc89-host-run-netns\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.812881 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/50870207-38dd-40d0-8a53-0eaa3af9d1fb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.813044 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b069c8d1-f785-4509-8ee6-7d44525bdc89-multus-daemon-config\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.813218 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b069c8d1-f785-4509-8ee6-7d44525bdc89-cni-binary-copy\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.817679 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b103bbd2-fb5d-4b2a-8b01-c32f699757df-proxy-tls\") pod 
\"machine-config-daemon-j9b7g\" (UID: \"b103bbd2-fb5d-4b2a-8b01-c32f699757df\") " pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.818794 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.830909 4667 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tszz9\" (UniqueName: \"kubernetes.io/projected/50870207-38dd-40d0-8a53-0eaa3af9d1fb-kube-api-access-tszz9\") pod \"multus-additional-cni-plugins-zgr94\" (UID: \"50870207-38dd-40d0-8a53-0eaa3af9d1fb\") " pod="openshift-multus/multus-additional-cni-plugins-zgr94" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.831488 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfwcb\" (UniqueName: \"kubernetes.io/projected/b103bbd2-fb5d-4b2a-8b01-c32f699757df-kube-api-access-zfwcb\") pod \"machine-config-daemon-j9b7g\" (UID: \"b103bbd2-fb5d-4b2a-8b01-c32f699757df\") " pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.834877 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8wnt\" (UniqueName: \"kubernetes.io/projected/b069c8d1-f785-4509-8ee6-7d44525bdc89-kube-api-access-n8wnt\") pod \"multus-cd764\" (UID: \"b069c8d1-f785-4509-8ee6-7d44525bdc89\") " pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.835196 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccvwd\" (UniqueName: \"kubernetes.io/projected/57dcb541-6b8f-4730-9fd8-7ce27870e3a3-kube-api-access-ccvwd\") pod \"node-resolver-ns977\" (UID: \"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\") " pod="openshift-dns/node-resolver-ns977" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.836364 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.841896 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.841941 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.841961 4667 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.841980 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.841992 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:23Z","lastTransitionTime":"2026-01-31T03:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.859779 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.874889 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.883197 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 
2025-08-24T17:21:41Z" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.888303 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-ns977" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.897049 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zgr94" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.903089 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-cd764" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.907690 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.915302 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.915340 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.915369 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.915398 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:23 crc kubenswrapper[4667]: E0131 03:48:23.915544 4667 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:48:23 crc kubenswrapper[4667]: E0131 03:48:23.915565 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:48:23 crc kubenswrapper[4667]: E0131 03:48:23.915580 4667 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:48:23 crc kubenswrapper[4667]: E0131 03:48:23.915596 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:48:23 crc kubenswrapper[4667]: E0131 03:48:23.915612 4667 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:23 crc kubenswrapper[4667]: E0131 03:48:23.915636 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:31.915613591 +0000 UTC m=+35.431948890 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:48:23 crc kubenswrapper[4667]: E0131 03:48:23.915672 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:31.915649302 +0000 UTC m=+35.431984601 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:48:23 crc kubenswrapper[4667]: E0131 03:48:23.915694 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:31.915687983 +0000 UTC m=+35.432023282 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:23 crc kubenswrapper[4667]: E0131 03:48:23.915770 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:48:23 crc kubenswrapper[4667]: E0131 03:48:23.915786 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:48:23 crc kubenswrapper[4667]: E0131 03:48:23.915817 4667 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:23 crc kubenswrapper[4667]: E0131 03:48:23.925287 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:31.925252678 +0000 UTC m=+35.441587977 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.938199 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.948789 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.948829 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.948840 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.948877 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.948890 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:23Z","lastTransitionTime":"2026-01-31T03:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.954174 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jhj5n"] Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.955182 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.955597 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.960073 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.960282 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.960453 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.965236 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.965449 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 03:48:23 crc 
kubenswrapper[4667]: I0131 03:48:23.965560 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.965683 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 03:48:23 crc kubenswrapper[4667]: I0131 03:48:23.984448 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.001784 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.013133 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.026716 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-var-lib-openvswitch\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.026759 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-node-log\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.026777 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-log-socket\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.026795 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-slash\") 
pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.026822 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-ovnkube-config\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.026836 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-env-overrides\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.026879 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-cni-bin\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.026905 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmls5\" (UniqueName: \"kubernetes.io/projected/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-kube-api-access-dmls5\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.026923 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-run-openvswitch\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.026939 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-run-ovn\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.026954 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-cni-netd\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.026971 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-ovnkube-script-lib\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.026987 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-ovn-node-metrics-cert\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.027003 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-kubelet\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.027018 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-etc-openvswitch\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.027044 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-systemd-units\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.027060 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.027078 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-run-systemd\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.027094 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-run-ovn-kubernetes\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.027110 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-run-netns\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.027491 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.045883 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.057593 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.057632 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.057642 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.057661 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.057673 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:24Z","lastTransitionTime":"2026-01-31T03:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.063540 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.079415 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.094649 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.107531 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.123851 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.127458 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-var-lib-openvswitch\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.127598 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-node-log\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.127695 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-log-socket\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.127769 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-log-socket\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.127620 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-var-lib-openvswitch\") pod 
\"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.127649 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-node-log\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.127941 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-slash\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128031 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-slash\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128078 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-ovnkube-config\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128121 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-cni-bin\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128148 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-env-overrides\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128192 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmls5\" (UniqueName: \"kubernetes.io/projected/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-kube-api-access-dmls5\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128213 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-run-openvswitch\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128238 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-run-ovn\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc 
kubenswrapper[4667]: I0131 03:48:24.128258 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-cni-netd\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128278 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-ovnkube-script-lib\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128301 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-ovn-node-metrics-cert\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128322 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-kubelet\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128342 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-etc-openvswitch\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128371 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128416 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-systemd-units\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128441 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-run-systemd\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128462 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-run-ovn-kubernetes\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: 
I0131 03:48:24.128499 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-run-netns\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128566 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-run-netns\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128783 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-env-overrides\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128794 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-run-openvswitch\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128869 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-run-ovn\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128800 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-ovnkube-config\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128903 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-cni-netd\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128923 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-run-systemd\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128944 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-run-ovn-kubernetes\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128957 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128870 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-etc-openvswitch\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.128981 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-systemd-units\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.129172 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-kubelet\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.129328 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-ovnkube-script-lib\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.129359 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-cni-bin\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.133063 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-ovn-node-metrics-cert\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.142577 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.146097 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmls5\" (UniqueName: \"kubernetes.io/projected/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-kube-api-access-dmls5\") pod \"ovnkube-node-jhj5n\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.160489 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.160526 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.160535 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.160553 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:24 crc 
kubenswrapper[4667]: I0131 03:48:24.160568 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:24Z","lastTransitionTime":"2026-01-31T03:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.160889 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.184039 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd
48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.207538 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.226285 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 07:47:59.514050473 +0000 UTC Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.263905 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:24 crc kubenswrapper[4667]: 
I0131 03:48:24.264179 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.264264 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.264352 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.264446 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:24Z","lastTransitionTime":"2026-01-31T03:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.270444 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.280882 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.280927 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.280987 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:24 crc kubenswrapper[4667]: E0131 03:48:24.281018 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:24 crc kubenswrapper[4667]: E0131 03:48:24.281224 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:24 crc kubenswrapper[4667]: E0131 03:48:24.281340 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:24 crc kubenswrapper[4667]: W0131 03:48:24.282758 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d685ba5_5ff5_4e74_8d02_99a233fc6c9b.slice/crio-61ef93122274eda5482aad4c31f67d4e8d20c68d3174cfcb2086cd35574e727e WatchSource:0}: Error finding container 61ef93122274eda5482aad4c31f67d4e8d20c68d3174cfcb2086cd35574e727e: Status 404 returned error can't find the container with id 61ef93122274eda5482aad4c31f67d4e8d20c68d3174cfcb2086cd35574e727e Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.368253 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.368307 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.368319 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.368338 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.368348 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:24Z","lastTransitionTime":"2026-01-31T03:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.434426 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ns977" event={"ID":"57dcb541-6b8f-4730-9fd8-7ce27870e3a3","Type":"ContainerStarted","Data":"559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6"} Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.434474 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ns977" event={"ID":"57dcb541-6b8f-4730-9fd8-7ce27870e3a3","Type":"ContainerStarted","Data":"3efb985506e105d1c0cd34a0778919df7f632bd859d3517c88d0df9da9b84f91"} Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.436299 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" event={"ID":"50870207-38dd-40d0-8a53-0eaa3af9d1fb","Type":"ContainerStarted","Data":"f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048"} Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.436326 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" event={"ID":"50870207-38dd-40d0-8a53-0eaa3af9d1fb","Type":"ContainerStarted","Data":"5ddf0e3ef8bfd98697c849a933967a80339babb67f11c782ced3561c85b1681c"} Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.439034 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerStarted","Data":"db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef"} Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.439094 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerStarted","Data":"298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d"} Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.439110 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerStarted","Data":"e5916196fa9e7ba6164bbb390348c92e94fdb4327695943bb78762b23d06b7b7"} Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.441163 4667 generic.go:334] "Generic (PLEG): container finished" podID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerID="3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9" exitCode=0 Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.441199 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerDied","Data":"3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9"} Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.441231 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerStarted","Data":"61ef93122274eda5482aad4c31f67d4e8d20c68d3174cfcb2086cd35574e727e"} Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.443183 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cd764" event={"ID":"b069c8d1-f785-4509-8ee6-7d44525bdc89","Type":"ContainerStarted","Data":"3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a"} Jan 31 03:48:24 crc 
kubenswrapper[4667]: I0131 03:48:24.443217 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cd764" event={"ID":"b069c8d1-f785-4509-8ee6-7d44525bdc89","Type":"ContainerStarted","Data":"87662a6b414a8f971a4107cb1803beb9e1091dda5f7ca0753cc5af0956cab67e"} Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.458975 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imag
eID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.471090 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.471128 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.471139 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.471158 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.471170 4667 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:24Z","lastTransitionTime":"2026-01-31T03:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.475584 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.491326 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.503614 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.518066 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.533242 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.548398 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.562289 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.573811 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.573888 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.573899 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.573918 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.573932 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:24Z","lastTransitionTime":"2026-01-31T03:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.577746 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.600317 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.621394 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.639165 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.656878 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.671042 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.675600 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.675632 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.675641 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.675659 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.675671 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:24Z","lastTransitionTime":"2026-01-31T03:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.685315 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.699297 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.713534 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.733127 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.747323 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.758901 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.771199 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.780341 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.780390 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.780401 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.780420 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.780431 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:24Z","lastTransitionTime":"2026-01-31T03:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.785944 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.799828 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.809649 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.837286 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.859384 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPat
h\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\
"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.872675 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.882762 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.882806 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.882816 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.882830 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.882899 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:24Z","lastTransitionTime":"2026-01-31T03:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.895281 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9
c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.985139 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.985175 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.985185 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.985203 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:24 crc kubenswrapper[4667]: I0131 03:48:24.985214 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:24Z","lastTransitionTime":"2026-01-31T03:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.088018 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.088399 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.088416 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.088433 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.088445 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:25Z","lastTransitionTime":"2026-01-31T03:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.192030 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.192069 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.192087 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.192102 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.192113 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:25Z","lastTransitionTime":"2026-01-31T03:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.227308 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 21:43:59.687747487 +0000 UTC Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.294757 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.294803 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.294811 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.294829 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.294857 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:25Z","lastTransitionTime":"2026-01-31T03:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.397931 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.397996 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.398015 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.398053 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.398071 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:25Z","lastTransitionTime":"2026-01-31T03:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.458116 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerStarted","Data":"0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce"} Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.458185 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerStarted","Data":"0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e"} Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.458200 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerStarted","Data":"0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91"} Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.458220 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerStarted","Data":"70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9"} Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.458235 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerStarted","Data":"e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3"} Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.458247 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerStarted","Data":"332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc"} Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.460045 4667 generic.go:334] "Generic (PLEG): container finished" podID="50870207-38dd-40d0-8a53-0eaa3af9d1fb" containerID="f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048" exitCode=0 Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.460096 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" event={"ID":"50870207-38dd-40d0-8a53-0eaa3af9d1fb","Type":"ContainerDied","Data":"f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048"} Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.474565 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:25Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.494052 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:25Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.501819 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.501894 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.501907 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.502376 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.502406 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:25Z","lastTransitionTime":"2026-01-31T03:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.507967 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:25Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.521931 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:25Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.534441 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:25Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.551172 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:25Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.564406 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:25Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.579545 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:25Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.593079 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:25Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.605098 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.605127 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.605135 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.605151 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.605160 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:25Z","lastTransitionTime":"2026-01-31T03:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.606204 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir
\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:25Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.627530 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd
48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:25Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.639168 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:25Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.657868 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:25Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.669057 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T03:48:25Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.708094 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.708147 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.708159 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.708194 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.708206 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:25Z","lastTransitionTime":"2026-01-31T03:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.810892 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.811074 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.811138 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.811207 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.811271 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:25Z","lastTransitionTime":"2026-01-31T03:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.913376 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.913425 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.913434 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.913449 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:25 crc kubenswrapper[4667]: I0131 03:48:25.913458 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:25Z","lastTransitionTime":"2026-01-31T03:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.015753 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.015781 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.015791 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.015807 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.015817 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.119303 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.119348 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.119357 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.119374 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.119385 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.222372 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.222440 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.222470 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.222502 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.222522 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.227693 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:56:26.146082898 +0000 UTC Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.273997 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-2zsr6"] Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.275112 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-2zsr6" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.277582 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.277677 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.278220 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.278491 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.280711 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.280775 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:26 crc kubenswrapper[4667]: E0131 03:48:26.280861 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:26 crc kubenswrapper[4667]: E0131 03:48:26.280989 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.281127 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:26 crc kubenswrapper[4667]: E0131 03:48:26.281268 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.297770 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.309809 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.325927 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.325975 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.325986 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.326005 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.326016 4667 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.326295 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.342393 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.354211 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-292wp\" (UniqueName: \"kubernetes.io/projected/97e83040-6e53-4c9c-afda-c21bee92d1b8-kube-api-access-292wp\") pod \"node-ca-2zsr6\" (UID: \"97e83040-6e53-4c9c-afda-c21bee92d1b8\") " pod="openshift-image-registry/node-ca-2zsr6" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.354300 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97e83040-6e53-4c9c-afda-c21bee92d1b8-host\") pod \"node-ca-2zsr6\" (UID: \"97e83040-6e53-4c9c-afda-c21bee92d1b8\") " pod="openshift-image-registry/node-ca-2zsr6" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.354351 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/97e83040-6e53-4c9c-afda-c21bee92d1b8-serviceca\") pod \"node-ca-2zsr6\" (UID: \"97e83040-6e53-4c9c-afda-c21bee92d1b8\") " pod="openshift-image-registry/node-ca-2zsr6" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.354899 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.370409 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.384327 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.399013 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.411063 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.425713 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.429084 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.429109 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.429119 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.429137 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.429148 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.435776 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.453001 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.455739 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97e83040-6e53-4c9c-afda-c21bee92d1b8-host\") pod \"node-ca-2zsr6\" (UID: \"97e83040-6e53-4c9c-afda-c21bee92d1b8\") " pod="openshift-image-registry/node-ca-2zsr6" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.455810 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/97e83040-6e53-4c9c-afda-c21bee92d1b8-serviceca\") pod \"node-ca-2zsr6\" (UID: \"97e83040-6e53-4c9c-afda-c21bee92d1b8\") " pod="openshift-image-registry/node-ca-2zsr6" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.455872 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97e83040-6e53-4c9c-afda-c21bee92d1b8-host\") pod \"node-ca-2zsr6\" (UID: \"97e83040-6e53-4c9c-afda-c21bee92d1b8\") " pod="openshift-image-registry/node-ca-2zsr6" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.455914 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-292wp\" (UniqueName: \"kubernetes.io/projected/97e83040-6e53-4c9c-afda-c21bee92d1b8-kube-api-access-292wp\") pod \"node-ca-2zsr6\" (UID: \"97e83040-6e53-4c9c-afda-c21bee92d1b8\") " pod="openshift-image-registry/node-ca-2zsr6" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.458424 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/97e83040-6e53-4c9c-afda-c21bee92d1b8-serviceca\") pod \"node-ca-2zsr6\" (UID: \"97e83040-6e53-4c9c-afda-c21bee92d1b8\") " pod="openshift-image-registry/node-ca-2zsr6" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.471575 4667 generic.go:334] "Generic (PLEG): container finished" podID="50870207-38dd-40d0-8a53-0eaa3af9d1fb" containerID="972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19" exitCode=0 Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.471623 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" event={"ID":"50870207-38dd-40d0-8a53-0eaa3af9d1fb","Type":"ContainerDied","Data":"972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19"} Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.484160 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd
48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.486696 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-292wp\" (UniqueName: \"kubernetes.io/projected/97e83040-6e53-4c9c-afda-c21bee92d1b8-kube-api-access-292wp\") pod \"node-ca-2zsr6\" (UID: \"97e83040-6e53-4c9c-afda-c21bee92d1b8\") " pod="openshift-image-registry/node-ca-2zsr6" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.501093 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.530930 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z 
is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.532017 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.532074 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.532085 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.532108 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.532120 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.550977 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.563795 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.580109 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.590953 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.598132 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-2zsr6" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.604716 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.621097 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.642427 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.644959 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.645005 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.645019 4667 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.645041 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.645057 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.660283 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.678163 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.690063 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.705766 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.726079 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPat
h\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\
"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.740938 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.750005 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.750045 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.750057 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.750080 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.750094 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.762502 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9
c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.780296 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:26Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.852948 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.852999 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.853009 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.853029 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.853039 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.955806 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.955865 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.955875 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.955892 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:26 crc kubenswrapper[4667]: I0131 03:48:26.955902 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:26Z","lastTransitionTime":"2026-01-31T03:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.044986 4667 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 31 03:48:27 crc kubenswrapper[4667]: W0131 03:48:27.045559 4667 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 31 03:48:27 crc kubenswrapper[4667]: W0131 03:48:27.045626 4667 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 31 03:48:27 crc kubenswrapper[4667]: W0131 03:48:27.045922 4667 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Jan 31 03:48:27 crc kubenswrapper[4667]: W0131 03:48:27.046105 4667 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.058205 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.058236 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.058246 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.058264 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.058277 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:27Z","lastTransitionTime":"2026-01-31T03:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.160866 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.161298 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.161313 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.161334 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.161351 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:27Z","lastTransitionTime":"2026-01-31T03:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.227974 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 03:59:41.191370733 +0000 UTC Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.264573 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.264618 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.264784 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.264970 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.264986 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:27Z","lastTransitionTime":"2026-01-31T03:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.294666 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.312005 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.326061 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.342664 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.365120 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/
var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.368352 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.368406 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.368436 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.368457 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.368470 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:27Z","lastTransitionTime":"2026-01-31T03:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.397725 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.414559 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.436092 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.451158 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.466716 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.471589 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.471630 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.471642 4667 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.471663 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.471675 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:27Z","lastTransitionTime":"2026-01-31T03:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.479952 4667 generic.go:334] "Generic (PLEG): container finished" podID="50870207-38dd-40d0-8a53-0eaa3af9d1fb" containerID="370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c" exitCode=0 Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.480022 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" event={"ID":"50870207-38dd-40d0-8a53-0eaa3af9d1fb","Type":"ContainerDied","Data":"370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c"} Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.482289 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2zsr6" event={"ID":"97e83040-6e53-4c9c-afda-c21bee92d1b8","Type":"ContainerStarted","Data":"a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9"} Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.483059 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2zsr6" event={"ID":"97e83040-6e53-4c9c-afda-c21bee92d1b8","Type":"ContainerStarted","Data":"f7630062b5278d03869e24c0ef75f03e54a467acc6877f58aeef4c811915a074"} Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.489655 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.490363 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerStarted","Data":"c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d"} Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.505944 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.524343 4667 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.545939 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.572155 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.576315 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.576359 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.576369 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.576389 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.576399 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:27Z","lastTransitionTime":"2026-01-31T03:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.592149 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.614225 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.644941 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.703206 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.703261 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.703273 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.703296 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.703311 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:27Z","lastTransitionTime":"2026-01-31T03:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.706515 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.721538 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.737691 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.751675 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.767358 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.783967 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.795008 4667 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.807121 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.807170 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.807179 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.807202 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.807213 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:27Z","lastTransitionTime":"2026-01-31T03:48:27Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.808631 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\
\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.827933 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8
b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22
f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.844250 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.864966 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z 
is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.877997 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:27Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.910444 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.910493 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.910509 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.910531 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:27 crc kubenswrapper[4667]: I0131 03:48:27.910543 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:27Z","lastTransitionTime":"2026-01-31T03:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.013026 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.013064 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.013078 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.013151 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.013173 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:28Z","lastTransitionTime":"2026-01-31T03:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.117695 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.117762 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.117782 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.117810 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.117829 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:28Z","lastTransitionTime":"2026-01-31T03:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.181795 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.220915 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.220959 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.220970 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.220987 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.221002 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:28Z","lastTransitionTime":"2026-01-31T03:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.228664 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 10:50:01.986235337 +0000 UTC Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.281094 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.281150 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.281100 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:28 crc kubenswrapper[4667]: E0131 03:48:28.281329 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:28 crc kubenswrapper[4667]: E0131 03:48:28.281411 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:28 crc kubenswrapper[4667]: E0131 03:48:28.281485 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.324636 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.324708 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.324728 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.324758 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.324780 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:28Z","lastTransitionTime":"2026-01-31T03:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.325127 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.385321 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.429725 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.429787 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.429806 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.429832 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.429880 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:28Z","lastTransitionTime":"2026-01-31T03:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.499035 4667 generic.go:334] "Generic (PLEG): container finished" podID="50870207-38dd-40d0-8a53-0eaa3af9d1fb" containerID="d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b" exitCode=0 Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.499804 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" event={"ID":"50870207-38dd-40d0-8a53-0eaa3af9d1fb","Type":"ContainerDied","Data":"d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b"} Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.535618 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.535698 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.535718 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.535754 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.535774 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:28Z","lastTransitionTime":"2026-01-31T03:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.541893 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd
48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.565731 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.572418 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.598194 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.619513 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.638645 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.638683 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.638695 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.638734 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.638747 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:28Z","lastTransitionTime":"2026-01-31T03:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.638814 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.659146 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.676314 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.690021 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.705608 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.738476 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.747013 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.747104 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.747122 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.747171 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.747188 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:28Z","lastTransitionTime":"2026-01-31T03:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.753931 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.769910 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.787289 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.800823 4667 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.822654 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:28Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.850529 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.850587 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.850600 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.850621 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.850634 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:28Z","lastTransitionTime":"2026-01-31T03:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.954679 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.954738 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.954755 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.954779 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:28 crc kubenswrapper[4667]: I0131 03:48:28.954796 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:28Z","lastTransitionTime":"2026-01-31T03:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.058516 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.058586 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.058600 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.058625 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.058642 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:29Z","lastTransitionTime":"2026-01-31T03:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.162036 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.162118 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.162137 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.162168 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.162191 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:29Z","lastTransitionTime":"2026-01-31T03:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.230508 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 22:45:24.778667267 +0000 UTC Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.265123 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.265171 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.265183 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.265202 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.265216 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:29Z","lastTransitionTime":"2026-01-31T03:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.367865 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.367946 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.367966 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.367992 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.368007 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:29Z","lastTransitionTime":"2026-01-31T03:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.473405 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.473971 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.473984 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.474006 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.474021 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:29Z","lastTransitionTime":"2026-01-31T03:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.509226 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerStarted","Data":"c30a245617d1b880fc338f1a0a001f477309db4d1e047031d4f3f169e499f46b"} Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.511004 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.511161 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.516633 4667 generic.go:334] "Generic (PLEG): container finished" podID="50870207-38dd-40d0-8a53-0eaa3af9d1fb" containerID="6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a" exitCode=0 Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.516701 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" event={"ID":"50870207-38dd-40d0-8a53-0eaa3af9d1fb","Type":"ContainerDied","Data":"6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a"} Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.536573 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd
48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.542705 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.563762 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.566493 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.576830 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.576885 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.576896 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.576914 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.576924 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:29Z","lastTransitionTime":"2026-01-31T03:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.582493 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30a245617d1b880fc338f1a0a001f477309db4d1e047031d4f3f169e499f46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.594666 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.604859 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.623731 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.638894 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.651477 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.663435 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.676963 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.679648 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.679688 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.679696 4667 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.679711 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.679721 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:29Z","lastTransitionTime":"2026-01-31T03:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.693019 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.708172 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.724291 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.737366 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.754569 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.774111 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.782875 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.782915 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.782926 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.782946 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.782958 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:29Z","lastTransitionTime":"2026-01-31T03:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.788972 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.805978 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.817513 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.831486 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.849090 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.865040 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.878399 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.885453 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.885486 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.885499 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.885517 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.885530 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:29Z","lastTransitionTime":"2026-01-31T03:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.889743 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.903381 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.915750 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.931604 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.954912 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPat
h\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\
"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.968246 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.987405 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30a245617d1b880fc338f1a0a001f477309db4d
1e047031d4f3f169e499f46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:29Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.988327 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.988365 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.988377 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.988392 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:29 crc kubenswrapper[4667]: I0131 03:48:29.988404 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:29Z","lastTransitionTime":"2026-01-31T03:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.092251 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.092514 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.092596 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.092707 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.092786 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:30Z","lastTransitionTime":"2026-01-31T03:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.195372 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.195415 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.195427 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.195445 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.195459 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:30Z","lastTransitionTime":"2026-01-31T03:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.230715 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:50:43.519638647 +0000 UTC Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.281197 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.281247 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.281279 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:30 crc kubenswrapper[4667]: E0131 03:48:30.281880 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:30 crc kubenswrapper[4667]: E0131 03:48:30.281904 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:30 crc kubenswrapper[4667]: E0131 03:48:30.282107 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.297592 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.297633 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.297642 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.297656 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.297665 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:30Z","lastTransitionTime":"2026-01-31T03:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.400365 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.400971 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.401201 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.401444 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.401626 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:30Z","lastTransitionTime":"2026-01-31T03:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.505190 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.505546 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.505686 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.505792 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.505957 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:30Z","lastTransitionTime":"2026-01-31T03:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.524426 4667 generic.go:334] "Generic (PLEG): container finished" podID="50870207-38dd-40d0-8a53-0eaa3af9d1fb" containerID="e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006" exitCode=0 Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.524545 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" event={"ID":"50870207-38dd-40d0-8a53-0eaa3af9d1fb","Type":"ContainerDied","Data":"e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006"} Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.524620 4667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.552062 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:30Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.574354 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:30Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.597933 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:30Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.615753 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.615804 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.615817 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.615916 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.615932 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:30Z","lastTransitionTime":"2026-01-31T03:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.616982 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:30Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.650028 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30a245617d1b880fc338f1a0a001f477309db4d1e047031d4f3f169e499f46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:30Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.675687 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd
48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:30Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.698751 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:30Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.718804 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:30Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.719812 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.719873 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.719888 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.719908 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.719918 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:30Z","lastTransitionTime":"2026-01-31T03:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.731707 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:30Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.745959 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:30Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.762581 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:30Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.778124 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:30Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.792350 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:30Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.804207 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:30Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.815090 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:30Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.822138 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.822173 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.822182 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.822201 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.822212 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:30Z","lastTransitionTime":"2026-01-31T03:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.925869 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.925923 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.925934 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.925951 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:30 crc kubenswrapper[4667]: I0131 03:48:30.925961 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:30Z","lastTransitionTime":"2026-01-31T03:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.029234 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.029284 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.029294 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.029311 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.029320 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:31Z","lastTransitionTime":"2026-01-31T03:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.132777 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.133173 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.133194 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.133218 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.133235 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:31Z","lastTransitionTime":"2026-01-31T03:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.230878 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:37:48.507724131 +0000 UTC Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.235385 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.235442 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.235454 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.235476 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.235489 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:31Z","lastTransitionTime":"2026-01-31T03:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.339265 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.339638 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.339903 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.340228 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.340447 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:31Z","lastTransitionTime":"2026-01-31T03:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.445365 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.445830 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.446074 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.446373 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.446582 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:31Z","lastTransitionTime":"2026-01-31T03:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.533820 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" event={"ID":"50870207-38dd-40d0-8a53-0eaa3af9d1fb","Type":"ContainerStarted","Data":"7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee"} Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.534006 4667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.549386 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.549431 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.549444 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.549462 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.549474 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:31Z","lastTransitionTime":"2026-01-31T03:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.566042 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30a245617d1b880fc338f1a0a001f477309db4d1e047031d4f3f169e499f46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:31Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.600703 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd
48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:31Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.617976 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:31Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.633101 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:31Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.650388 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:31Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.651975 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.652197 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.652341 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.652481 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.652604 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:31Z","lastTransitionTime":"2026-01-31T03:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.670421 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:31Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.695115 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:31Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.710481 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:31Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.739453 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:31Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.756073 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.756281 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.756378 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.756470 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.756588 4667 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:31Z","lastTransitionTime":"2026-01-31T03:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.756736 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:31Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.778116 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:31Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.797613 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:31Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.812298 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
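[Annotation] Every "Failed to update status for pod" record in this stretch fails for the same underlying reason: the kubelet's status patch is intercepted by the pod.network-node-identity.openshift.io webhook, whose serving certificate expired 2025-08-24T17:21:41Z while the node clock reads 2026-01-31. The failure happens during the TLS handshake and is the standard x509 validity-window check. A minimal Go sketch of that check follows; the certificate path is a placeholder, not the webhook's actual mount point.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Placeholder path; the real serving cert lives wherever the
	// network-node-identity webhook mounts it on this node.
	data, err := os.ReadFile("/path/to/webhook-serving.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// The branch this log keeps hitting: wall clock past NotAfter.
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	default:
		fmt.Println("valid until", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}
```

Until that certificate is rotated (or the clock skew resolved), every status patch routed through the webhook will keep failing with the same x509 error, which is why the identical message recurs across otherwise healthy pods.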
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:31Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.827006 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:31Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.837748 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:31Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.841116 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:48:31 crc kubenswrapper[4667]: E0131 03:48:31.841337 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:48:47.841301433 +0000 UTC m=+51.357636732 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.858647 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.858690 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.858700 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.858720 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.858732 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:31Z","lastTransitionTime":"2026-01-31T03:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
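[Annotation] The UnmountVolume.TearDown failure above is a different failure mode from the webhook errors: the kubelet still tracks a mount for pvc-657094db-… but cannot get a CSI client because kubevirt.io.hostpath-provisioner has not (yet) re-registered after the restart. Conceptually the lookup is a keyed-set membership test against the kubelet's plugin registry; the sketch below is a toy model of that idea, not kubelet code.

```go
package main

import (
	"fmt"
	"sync"
)

// Toy stand-in for the kubelet's registry of CSI node plugins.
// Drivers appear here only after re-registering over the kubelet's
// plugin-registration socket, so a freshly restarted node can hold
// mounts for a driver it does not currently know about.
type csiRegistry struct {
	mu      sync.RWMutex
	drivers map[string]struct{}
}

func (r *csiRegistry) register(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = struct{}{}
}

func (r *csiRegistry) clientFor(name string) error {
	r.mu.RLock()
	defer r.mu.RUnlock()
	if _, ok := r.drivers[name]; !ok {
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return nil
}

func main() {
	reg := &csiRegistry{drivers: map[string]struct{}{}}
	// Before re-registration: the error seen in the log.
	fmt.Println(reg.clientFor("kubevirt.io.hostpath-provisioner"))
	// Once the driver pod re-registers, TearDown can proceed.
	reg.register("kubevirt.io.hostpath-provisioner")
	fmt.Println(reg.clientFor("kubevirt.io.hostpath-provisioner"))
}
```

The retry scheduled for 03:48:47 will succeed only if the hostpath-provisioner plugin has re-registered by then; otherwise the operation is parked again with a longer backoff.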
Has your network provider started?"} Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.942714 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.942800 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.942837 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.942925 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:31 crc kubenswrapper[4667]: E0131 03:48:31.942975 4667 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:48:31 crc kubenswrapper[4667]: E0131 03:48:31.942998 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:48:31 crc kubenswrapper[4667]: E0131 03:48:31.943450 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:48:31 crc kubenswrapper[4667]: E0131 03:48:31.943477 4667 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:31 crc kubenswrapper[4667]: E0131 03:48:31.943120 4667 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:48:31 crc kubenswrapper[4667]: E0131 03:48:31.943548 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:47.94351908 +0000 UTC m=+51.459854419 (durationBeforeRetry 16s). 
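[Annotation] The MountVolume.SetUp failures in this stretch are all for projected service-account volumes (kube-api-access-*). Such a volume is assembled from a bound token plus the kube-root-ca.crt and openshift-service-ca.crt configmaps, and it cannot be built while those objects are "not registered" in the kubelet's object cache after restart. Below is a rough reconstruction of the volume's source list using the upstream API types; the names come from the log, the shape is the standard one for these volumes, and the snippet assumes the k8s.io/api module is available.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// kubeAPIAccessSources sketches the projected sources behind a
// kube-api-access-* volume: a service-account token plus the two
// configmaps the kubelet reports as "not registered".
func kubeAPIAccessSources() []corev1.VolumeProjection {
	return []corev1.VolumeProjection{
		{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
		{ConfigMap: &corev1.ConfigMapProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
		}},
		{ConfigMap: &corev1.ConfigMapProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
		}},
	}
}

func main() {
	for _, src := range kubeAPIAccessSources() {
		fmt.Printf("%+v\n", src)
	}
}
```

Because every source must resolve before the volume mounts, a single unregistered configmap is enough to fail the whole kube-api-access mount, which is what the per-source errors below show.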
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:31 crc kubenswrapper[4667]: E0131 03:48:31.943172 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:48:31 crc kubenswrapper[4667]: E0131 03:48:31.943640 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:48:31 crc kubenswrapper[4667]: E0131 03:48:31.943663 4667 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:31 crc kubenswrapper[4667]: E0131 03:48:31.943597 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:47.943584512 +0000 UTC m=+51.459919851 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:48:31 crc kubenswrapper[4667]: E0131 03:48:31.943740 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:47.943706565 +0000 UTC m=+51.460042054 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:31 crc kubenswrapper[4667]: E0131 03:48:31.943775 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:47.943762886 +0000 UTC m=+51.460098385 (durationBeforeRetry 16s). 
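[Annotation] Each failed volume operation above is parked with "No retries permitted until …" and a durationBeforeRetry of 16s. That delay is consistent with a capped doubling backoff (500ms, 1s, 2s, 4s, 8s, 16s, …) across successive failures since boot. A minimal sketch of such a policy follows; the initial and maximum values are assumptions for illustration, not the kubelet's exact tuning.

```go
package main

import (
	"fmt"
	"time"
)

// Assumed tuning: the log's 16s delay matches a doubling series
// starting at 500ms, capped at some maximum.
const (
	initialBackoff = 500 * time.Millisecond
	maxBackoff     = 2 * time.Minute
)

func nextBackoff(current time.Duration) time.Duration {
	if current == 0 {
		return initialBackoff
	}
	if next := 2 * current; next < maxBackoff {
		return next
	}
	return maxBackoff
}

func main() {
	var d time.Duration
	for i := 0; i < 7; i++ {
		d = nextBackoff(d)
		fmt.Println(d) // 500ms 1s 2s 4s 8s 16s 32s
	}
}
```

The practical effect is visible in the timestamps: operations that failed at m=+51.36 through m=+51.46 are all rescheduled roughly 16s out, at 03:48:47.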
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.961487 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.961541 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.961561 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.961586 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:31 crc kubenswrapper[4667]: I0131 03:48:31.961604 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:31Z","lastTransitionTime":"2026-01-31T03:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.064147 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.064208 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.064218 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.064240 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.064251 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.167503 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.167566 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.167577 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.167602 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.167616 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.232021 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:36:18.525679249 +0000 UTC Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.271541 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.271591 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.271616 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.271643 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.271658 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.281629 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.281663 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.281671 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:32 crc kubenswrapper[4667]: E0131 03:48:32.281937 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:32 crc kubenswrapper[4667]: E0131 03:48:32.282128 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:32 crc kubenswrapper[4667]: E0131 03:48:32.282386 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.347978 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.348037 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.348050 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.348071 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.348086 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
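[Annotation] The "Error syncing pod, skipping" records above all trace back to a single condition: the runtime reports NetworkReady=false because nothing has yet written a CNI configuration into /etc/kubernetes/cni/net.d/ (on this cluster that normally happens once the network provider pods come up). The readiness test amounts to scanning that directory for a usable config file; a rough sketch follows, with the accepted extensions assumed rather than taken from this log.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Directory named in the log; the extension list mirrors what common
// CNI config loaders accept and is an assumption here.
const cniConfDir = "/etc/kubernetes/cni/net.d"

func main() {
	entries, err := os.ReadDir(cniConfDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	var found []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		// The condition the kubelet keeps reporting:
		// NetworkReady=false / NetworkPluginNotReady.
		fmt.Println("no CNI configuration file in", cniConfDir)
		return
	}
	fmt.Println("CNI configs:", found)
}
```

Pods that need a sandbox (the three listed above) stay parked on this check, while host-network pods like multus and the machine-config-daemon keep running, which matches the mixed picture in this log.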
Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4667]: E0131 03:48:32.360659 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
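[Annotation] The node-status payload in this entry is a strategic merge patch: the `$setElementOrder/conditions` directive pins the order of the merge-keyed conditions list so the API server can merge entries by their `type` key without reordering them. A minimal illustration of that patch shape, built as plain Go data with values abbreviated from the log:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// The directive lists only the merge keys, in the desired order;
	// "conditions" then carries the changed entries, matched to the
	// existing list by their "type" field.
	patch := map[string]any{
		"status": map[string]any{
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"},
				{"type": "DiskPressure"},
				{"type": "PIDPressure"},
				{"type": "Ready"},
			},
			"conditions": []map[string]string{
				{"type": "Ready", "status": "False", "reason": "KubeletNotReady"},
			},
		},
	}
	out, err := json.MarshalIndent(patch, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

Here the patch itself is well-formed; it is rejected only because the node.network-node-identity.openshift.io webhook in front of the API fails the same expired-certificate TLS check as the pod-status patches, so node-status updates are retried (and fail) three times in this log.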
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:32Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.365215 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.365272 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.365286 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.365311 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.365333 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4667]: E0131 03:48:32.381900 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:32Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.386610 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.386654 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.386666 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.386700 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.386720 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4667]: E0131 03:48:32.406288 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:32Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.411169 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.411227 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.411244 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.411272 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.411292 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4667]: E0131 03:48:32.458778 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:32Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.463285 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.463328 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.463349 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.463368 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.463379 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4667]: E0131 03:48:32.478027 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:32Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:32 crc kubenswrapper[4667]: E0131 03:48:32.478186 4667 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.480031 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.480058 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.480071 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.480091 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.480105 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.540557 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovnkube-controller/0.log" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.544152 4667 generic.go:334] "Generic (PLEG): container finished" podID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerID="c30a245617d1b880fc338f1a0a001f477309db4d1e047031d4f3f169e499f46b" exitCode=1 Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.544197 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerDied","Data":"c30a245617d1b880fc338f1a0a001f477309db4d1e047031d4f3f169e499f46b"} Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.545111 4667 scope.go:117] "RemoveContainer" containerID="c30a245617d1b880fc338f1a0a001f477309db4d1e047031d4f3f169e499f46b" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.578193 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd
48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:32Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.590541 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.590583 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.590615 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.590632 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.590644 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.595260 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:32Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.613383 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c30a245617d1b880fc338f1a0a001f477309db4d
1e047031d4f3f169e499f46b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30a245617d1b880fc338f1a0a001f477309db4d1e047031d4f3f169e499f46b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"formers/factory.go:160\\\\nI0131 03:48:32.404940 5871 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 03:48:32.405187 5871 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:48:32.405447 5871 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 03:48:32.405483 5871 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 03:48:32.405524 5871 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 03:48:32.405555 5871 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 03:48:32.405606 5871 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:48:32.405659 5871 factory.go:656] Stopping watch factory\\\\nI0131 03:48:32.405700 5871 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:48:32.405735 5871 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 03:48:32.405769 5871 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:48:32.405805 5871 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:48:32.405792 5871 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 03:48:32.406612 5871 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:32Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.629209 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:32Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.643936 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz
9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\"
:\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T03:48:32Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.656155 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:32Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.672286 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:32Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.687526 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:32Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.693264 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.693337 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.693350 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.693368 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.693407 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.700407 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:32Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.712387 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:32Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.729189 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:32Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.742774 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:32Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.757437 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:32Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.771079 4667 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:32Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.785063 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:32Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.796602 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.796696 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.796711 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.797024 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.797057 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.900421 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.900497 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.900515 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.900544 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:32 crc kubenswrapper[4667]: I0131 03:48:32.900563 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:32Z","lastTransitionTime":"2026-01-31T03:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.003195 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.003246 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.003255 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.003278 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.003289 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:33Z","lastTransitionTime":"2026-01-31T03:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.105915 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.105965 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.105976 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.105995 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.106007 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:33Z","lastTransitionTime":"2026-01-31T03:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.208724 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.208769 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.208783 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.208804 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.208817 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:33Z","lastTransitionTime":"2026-01-31T03:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.232189 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 17:02:05.358051405 +0000 UTC Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.311389 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.311428 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.311440 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.311459 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.311471 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:33Z","lastTransitionTime":"2026-01-31T03:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.414384 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.414439 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.414453 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.414475 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.414492 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:33Z","lastTransitionTime":"2026-01-31T03:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.517685 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.517717 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.517727 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.517742 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.517752 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:33Z","lastTransitionTime":"2026-01-31T03:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.549455 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovnkube-controller/1.log" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.550056 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovnkube-controller/0.log" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.553104 4667 generic.go:334] "Generic (PLEG): container finished" podID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerID="4dcfd6f322ee75abb3f8338832201628ffa44f71fee53f35735ce5072f79ddd0" exitCode=1 Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.553145 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerDied","Data":"4dcfd6f322ee75abb3f8338832201628ffa44f71fee53f35735ce5072f79ddd0"} Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.553186 4667 scope.go:117] "RemoveContainer" containerID="c30a245617d1b880fc338f1a0a001f477309db4d1e047031d4f3f169e499f46b" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.553991 4667 scope.go:117] "RemoveContainer" containerID="4dcfd6f322ee75abb3f8338832201628ffa44f71fee53f35735ce5072f79ddd0" Jan 31 03:48:33 crc kubenswrapper[4667]: E0131 03:48:33.554281 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jhj5n_openshift-ovn-kubernetes(3d685ba5-5ff5-4e74-8d02-99a233fc6c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.570821 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.585743 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.597904 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.615027 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.619791 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.619834 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.619864 4667 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.619884 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.619900 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:33Z","lastTransitionTime":"2026-01-31T03:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.639100 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.651829 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.675735 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.690323 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.703562 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.722208 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.722657 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.722737 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.722748 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.722785 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.722796 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:33Z","lastTransitionTime":"2026-01-31T03:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.737258 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.742692 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.754996 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.780076 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dcfd6f322ee75abb3f8338832201628ffa44f71
fee53f35735ce5072f79ddd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30a245617d1b880fc338f1a0a001f477309db4d1e047031d4f3f169e499f46b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"formers/factory.go:160\\\\nI0131 03:48:32.404940 5871 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 03:48:32.405187 5871 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:48:32.405447 5871 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 03:48:32.405483 5871 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 03:48:32.405524 5871 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 03:48:32.405555 5871 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 03:48:32.405606 5871 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:48:32.405659 5871 factory.go:656] Stopping watch factory\\\\nI0131 03:48:32.405700 5871 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:48:32.405735 5871 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 03:48:32.405769 5871 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:48:32.405805 5871 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:48:32.405792 5871 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 03:48:32.406612 5871 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dcfd6f322ee75abb3f8338832201628ffa44f71fee53f35735ce5072f79ddd0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"message\\\":\\\"enshift-cluster-version/cluster-version-operator for network=default are: map[]\\\\nF0131 03:48:33.432694 6023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z]\\\\nI0131 03:48:33.432596 6023 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 03:48:33.432526 6023 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.804952 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd
48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.819467 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.825489 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.825575 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.825598 4667 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.825628 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.825647 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:33Z","lastTransitionTime":"2026-01-31T03:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.840207 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.856810 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.868214 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.883114 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.905202 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPat
h\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\
"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.928794 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.929094 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.929165 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.929235 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.929302 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:33Z","lastTransitionTime":"2026-01-31T03:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.928944 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.956995 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dcfd6f322ee75abb3f8338832201628ffa44f71
fee53f35735ce5072f79ddd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30a245617d1b880fc338f1a0a001f477309db4d1e047031d4f3f169e499f46b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"formers/factory.go:160\\\\nI0131 03:48:32.404940 5871 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 03:48:32.405187 5871 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:48:32.405447 5871 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 03:48:32.405483 5871 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 03:48:32.405524 5871 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 03:48:32.405555 5871 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 03:48:32.405606 5871 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:48:32.405659 5871 factory.go:656] Stopping watch factory\\\\nI0131 03:48:32.405700 5871 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:48:32.405735 5871 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 03:48:32.405769 5871 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:48:32.405805 5871 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:48:32.405792 5871 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 03:48:32.406612 5871 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dcfd6f322ee75abb3f8338832201628ffa44f71fee53f35735ce5072f79ddd0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"message\\\":\\\"enshift-cluster-version/cluster-version-operator for network=default are: map[]\\\\nF0131 03:48:33.432694 6023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z]\\\\nI0131 03:48:33.432596 6023 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 03:48:33.432526 6023 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.973444 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:33 crc kubenswrapper[4667]: I0131 03:48:33.987752 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.001576 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.014060 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:34Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.030864 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:34Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.032096 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.032208 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.032273 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.032344 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.032410 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:34Z","lastTransitionTime":"2026-01-31T03:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.047638 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:34Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.062375 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b
5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:34Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.073717 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:34Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.135250 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.135646 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:34 crc kubenswrapper[4667]: 
I0131 03:48:34.135938 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.136166 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.136349 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:34Z","lastTransitionTime":"2026-01-31T03:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.233124 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 17:42:01.106934266 +0000 UTC Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.238962 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.238989 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.238997 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.239011 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.239021 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:34Z","lastTransitionTime":"2026-01-31T03:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.281496 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:34 crc kubenswrapper[4667]: E0131 03:48:34.281616 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.282114 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:34 crc kubenswrapper[4667]: E0131 03:48:34.282199 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.282251 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:34 crc kubenswrapper[4667]: E0131 03:48:34.282310 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.341432 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.341464 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.341473 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.341488 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.341498 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:34Z","lastTransitionTime":"2026-01-31T03:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.446552 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.446628 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.446646 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.446672 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.446688 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:34Z","lastTransitionTime":"2026-01-31T03:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.549953 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.550001 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.550017 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.550039 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.550057 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:34Z","lastTransitionTime":"2026-01-31T03:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.562306 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovnkube-controller/1.log" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.652998 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.653113 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.653154 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.653191 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.653216 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:34Z","lastTransitionTime":"2026-01-31T03:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.757676 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.758398 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.758427 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.758459 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.758483 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:34Z","lastTransitionTime":"2026-01-31T03:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.862189 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.862274 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.862300 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.862333 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.862359 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:34Z","lastTransitionTime":"2026-01-31T03:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.978870 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.978940 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.978954 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.978977 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:34 crc kubenswrapper[4667]: I0131 03:48:34.978996 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:34Z","lastTransitionTime":"2026-01-31T03:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.082617 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.082679 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.082695 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.082721 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.082737 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:35Z","lastTransitionTime":"2026-01-31T03:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.185319 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.185373 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.185387 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.185407 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.185421 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:35Z","lastTransitionTime":"2026-01-31T03:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.234264 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 00:37:32.677521935 +0000 UTC Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.288606 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.288662 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.288674 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.288693 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.288708 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:35Z","lastTransitionTime":"2026-01-31T03:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.391571 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.391607 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.391618 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.391635 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.391647 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:35Z","lastTransitionTime":"2026-01-31T03:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.494255 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.494305 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.494320 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.494347 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.494360 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:35Z","lastTransitionTime":"2026-01-31T03:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.596870 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.596921 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.596937 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.596954 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.596999 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:35Z","lastTransitionTime":"2026-01-31T03:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.699314 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.699387 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.699401 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.699425 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.699440 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:35Z","lastTransitionTime":"2026-01-31T03:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.750459 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz"] Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.751042 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.755920 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.761785 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.768041 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:35Z is after 
2025-08-24T17:21:41Z" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.790786 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:35Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.792408 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3920ffb2-08f3-440b-bc6c-319a57bbe195-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4q9qz\" (UID: \"3920ffb2-08f3-440b-bc6c-319a57bbe195\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.792476 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3920ffb2-08f3-440b-bc6c-319a57bbe195-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4q9qz\" (UID: \"3920ffb2-08f3-440b-bc6c-319a57bbe195\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.792706 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3920ffb2-08f3-440b-bc6c-319a57bbe195-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4q9qz\" (UID: \"3920ffb2-08f3-440b-bc6c-319a57bbe195\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.792886 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlwd5\" (UniqueName: \"kubernetes.io/projected/3920ffb2-08f3-440b-bc6c-319a57bbe195-kube-api-access-tlwd5\") pod \"ovnkube-control-plane-749d76644c-4q9qz\" (UID: \"3920ffb2-08f3-440b-bc6c-319a57bbe195\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.801561 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.801609 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.801622 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.801642 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.801656 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:35Z","lastTransitionTime":"2026-01-31T03:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.809440 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:35Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.829269 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:35Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.844869 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:35Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.866991 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:35Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.889001 
4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce
4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:35Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.893994 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/3920ffb2-08f3-440b-bc6c-319a57bbe195-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4q9qz\" (UID: \"3920ffb2-08f3-440b-bc6c-319a57bbe195\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.894075 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3920ffb2-08f3-440b-bc6c-319a57bbe195-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4q9qz\" (UID: \"3920ffb2-08f3-440b-bc6c-319a57bbe195\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.894164 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3920ffb2-08f3-440b-bc6c-319a57bbe195-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4q9qz\" (UID: \"3920ffb2-08f3-440b-bc6c-319a57bbe195\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.894231 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlwd5\" (UniqueName: \"kubernetes.io/projected/3920ffb2-08f3-440b-bc6c-319a57bbe195-kube-api-access-tlwd5\") pod \"ovnkube-control-plane-749d76644c-4q9qz\" (UID: \"3920ffb2-08f3-440b-bc6c-319a57bbe195\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.894983 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3920ffb2-08f3-440b-bc6c-319a57bbe195-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4q9qz\" (UID: \"3920ffb2-08f3-440b-bc6c-319a57bbe195\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.895325 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3920ffb2-08f3-440b-bc6c-319a57bbe195-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4q9qz\" (UID: \"3920ffb2-08f3-440b-bc6c-319a57bbe195\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.904892 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3920ffb2-08f3-440b-bc6c-319a57bbe195-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4q9qz\" (UID: \"3920ffb2-08f3-440b-bc6c-319a57bbe195\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.904927 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.905007 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.905024 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.905052 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 
03:48:35.905070 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:35Z","lastTransitionTime":"2026-01-31T03:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.912336 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:35Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.924718 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlwd5\" (UniqueName: \"kubernetes.io/projected/3920ffb2-08f3-440b-bc6c-319a57bbe195-kube-api-access-tlwd5\") pod \"ovnkube-control-plane-749d76644c-4q9qz\" (UID: \"3920ffb2-08f3-440b-bc6c-319a57bbe195\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.931950 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:35Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.946612 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:35Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.963502 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:35Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:35 crc kubenswrapper[4667]: I0131 03:48:35.990131 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:35Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.008366 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.008423 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.008437 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.008461 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.008480 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:36Z","lastTransitionTime":"2026-01-31T03:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.009970 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.042831 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dcfd6f322ee75abb3f8338832201628ffa44f71
fee53f35735ce5072f79ddd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30a245617d1b880fc338f1a0a001f477309db4d1e047031d4f3f169e499f46b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"formers/factory.go:160\\\\nI0131 03:48:32.404940 5871 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 03:48:32.405187 5871 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:48:32.405447 5871 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 03:48:32.405483 5871 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 03:48:32.405524 5871 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 03:48:32.405555 5871 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 03:48:32.405606 5871 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:48:32.405659 5871 factory.go:656] Stopping watch factory\\\\nI0131 03:48:32.405700 5871 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:48:32.405735 5871 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 03:48:32.405769 5871 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:48:32.405805 5871 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:48:32.405792 5871 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 03:48:32.406612 5871 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dcfd6f322ee75abb3f8338832201628ffa44f71fee53f35735ce5072f79ddd0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"message\\\":\\\"enshift-cluster-version/cluster-version-operator for network=default are: map[]\\\\nF0131 03:48:33.432694 6023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z]\\\\nI0131 03:48:33.432596 6023 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 03:48:33.432526 6023 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.060510 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3920ffb2-08f3-440b-bc6c-319a57bbe195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4q9qz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.064877 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.088646 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.111392 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.111463 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.111483 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.111513 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.111534 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:36Z","lastTransitionTime":"2026-01-31T03:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.213776 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.213832 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.213875 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.213898 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.213914 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:36Z","lastTransitionTime":"2026-01-31T03:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.235078 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:32:22.785193938 +0000 UTC Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.280807 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.280830 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.280940 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:36 crc kubenswrapper[4667]: E0131 03:48:36.281106 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:36 crc kubenswrapper[4667]: E0131 03:48:36.281760 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:36 crc kubenswrapper[4667]: E0131 03:48:36.281876 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.316904 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.316955 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.316969 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.316988 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.317000 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:36Z","lastTransitionTime":"2026-01-31T03:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.419745 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.419794 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.419807 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.419829 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.419863 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:36Z","lastTransitionTime":"2026-01-31T03:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.522796 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.522832 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.522864 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.522882 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.522894 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:36Z","lastTransitionTime":"2026-01-31T03:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.576700 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" event={"ID":"3920ffb2-08f3-440b-bc6c-319a57bbe195","Type":"ContainerStarted","Data":"4e3a943070029bd6e98682f2a4b3cfc0ab26dc2e9e7ab5179a60316923dbad33"} Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.576755 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" event={"ID":"3920ffb2-08f3-440b-bc6c-319a57bbe195","Type":"ContainerStarted","Data":"2e27cd88c349d4786018ab6ae21d45b22cdb95054c0b188bdce8cf97c53c09c5"} Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.576769 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" event={"ID":"3920ffb2-08f3-440b-bc6c-319a57bbe195","Type":"ContainerStarted","Data":"e224235fbc1af3cc688442554f1e90553b941e97c20ea4b36165b449882b6db4"} Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.590828 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.620456 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.625149 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.625196 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.625212 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.625236 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.625252 4667 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:36Z","lastTransitionTime":"2026-01-31T03:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.637067 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.651619 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.661637 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.673836 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.687873 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.698538 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.712927 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.724235 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.727674 4667 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.727715 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.727724 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.727740 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.727783 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:36Z","lastTransitionTime":"2026-01-31T03:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.733203 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.746189 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.765132 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c52
85190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.776397 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.792328 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dcfd6f322ee75abb3f8338832201628ffa44f71
fee53f35735ce5072f79ddd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30a245617d1b880fc338f1a0a001f477309db4d1e047031d4f3f169e499f46b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"formers/factory.go:160\\\\nI0131 03:48:32.404940 5871 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 03:48:32.405187 5871 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:48:32.405447 5871 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 03:48:32.405483 5871 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 03:48:32.405524 5871 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 03:48:32.405555 5871 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 03:48:32.405606 5871 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:48:32.405659 5871 factory.go:656] Stopping watch factory\\\\nI0131 03:48:32.405700 5871 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:48:32.405735 5871 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 03:48:32.405769 5871 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:48:32.405805 5871 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:48:32.405792 5871 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 03:48:32.406612 5871 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dcfd6f322ee75abb3f8338832201628ffa44f71fee53f35735ce5072f79ddd0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"message\\\":\\\"enshift-cluster-version/cluster-version-operator for network=default are: map[]\\\\nF0131 03:48:33.432694 6023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z]\\\\nI0131 03:48:33.432596 6023 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 03:48:33.432526 6023 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:36Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.802482 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3920ffb2-08f3-440b-bc6c-319a57bbe195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27cd88c349d4786018ab6ae21d45b22cdb95054c0b188bdce8cf97c53c09c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e3a943070029bd6e98682f2a4b3cfc0ab26dc2e9e7ab5179a60316923dbad33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4q9qz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:36Z is after 2025-08-24T17:21:41Z" Jan 31 
03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.830470 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.830506 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.830518 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.830535 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.830545 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:36Z","lastTransitionTime":"2026-01-31T03:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.933202 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.933262 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.933271 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.933322 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:36 crc kubenswrapper[4667]: I0131 03:48:36.933338 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:36Z","lastTransitionTime":"2026-01-31T03:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.036305 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.036349 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.036357 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.036376 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.036386 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:37Z","lastTransitionTime":"2026-01-31T03:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.139602 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.139629 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.139637 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.139649 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.139657 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:37Z","lastTransitionTime":"2026-01-31T03:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.235938 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:38:08.150461589 +0000 UTC Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.241959 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.242002 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.242015 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.242033 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.242044 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:37Z","lastTransitionTime":"2026-01-31T03:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.295041 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.328511 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dcfd6f322ee75abb3f8338832201628ffa44f71
fee53f35735ce5072f79ddd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30a245617d1b880fc338f1a0a001f477309db4d1e047031d4f3f169e499f46b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"formers/factory.go:160\\\\nI0131 03:48:32.404940 5871 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 03:48:32.405187 5871 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:48:32.405447 5871 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 03:48:32.405483 5871 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 03:48:32.405524 5871 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 03:48:32.405555 5871 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 03:48:32.405606 5871 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:48:32.405659 5871 factory.go:656] Stopping watch factory\\\\nI0131 03:48:32.405700 5871 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:48:32.405735 5871 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 03:48:32.405769 5871 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:48:32.405805 5871 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:48:32.405792 5871 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 03:48:32.406612 5871 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dcfd6f322ee75abb3f8338832201628ffa44f71fee53f35735ce5072f79ddd0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"message\\\":\\\"enshift-cluster-version/cluster-version-operator for network=default are: map[]\\\\nF0131 03:48:33.432694 6023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z]\\\\nI0131 03:48:33.432596 6023 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 03:48:33.432526 6023 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.344752 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.345003 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.345147 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.345279 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.345418 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:37Z","lastTransitionTime":"2026-01-31T03:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.346295 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3920ffb2-08f3-440b-bc6c-319a57bbe195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27cd88c349d4786018ab6ae21d45b22cdb95054c0b188bdce8cf97c53c09c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e3a943070029bd6e98682f2a4b3cfc0ab26dc2e9e7ab5179a60316923dbad33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4q9qz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.377143 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.392930 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.408949 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.422347 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.436278 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.447944 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.447967 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.447977 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.447992 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.448002 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:37Z","lastTransitionTime":"2026-01-31T03:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.456299 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.469439 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b
5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.480460 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.504351 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.521870 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.533649 4667 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.545066 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.549652 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.549685 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.549695 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.549712 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.549724 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:37Z","lastTransitionTime":"2026-01-31T03:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.557985 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.632904 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-n5jv7"] Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.633824 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:37 crc kubenswrapper[4667]: E0131 03:48:37.633990 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.645198 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.652304 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.652464 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.652537 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.652626 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.652690 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:37Z","lastTransitionTime":"2026-01-31T03:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.666303 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dcfd6f322ee75abb3f8338832201628ffa44f71fee53f35735ce5072f79ddd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c30a245617d1b880fc338f1a0a001f477309db4d1e047031d4f3f169e499f46b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:32Z\\\",\\\"message\\\":\\\"formers/factory.go:160\\\\nI0131 03:48:32.404940 5871 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 03:48:32.405187 5871 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 03:48:32.405447 5871 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 03:48:32.405483 5871 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 03:48:32.405524 5871 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 03:48:32.405555 5871 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 03:48:32.405606 5871 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 03:48:32.405659 5871 factory.go:656] Stopping watch factory\\\\nI0131 03:48:32.405700 5871 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 03:48:32.405735 5871 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 03:48:32.405769 5871 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 03:48:32.405805 5871 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 03:48:32.405792 5871 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 03:48:32.406612 5871 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dcfd6f322ee75abb3f8338832201628ffa44f71fee53f35735ce5072f79ddd0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"message\\\":\\\"enshift-cluster-version/cluster-version-operator for network=default are: map[]\\\\nF0131 03:48:33.432694 6023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z]\\\\nI0131 03:48:33.432596 6023 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 03:48:33.432526 6023 
model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.682365 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3920ffb2-08f3-440b-bc6c-319a57bbe195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27cd88c349d4786018ab6ae21d45b22cdb95054c0b188bdce8cf97c53c09c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e3a943070029bd6e98682f2a4b3cfc0ab26dc2e9e7ab5179a60316923dbad33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4q9qz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.702887 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd
48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.710016 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp2s8\" (UniqueName: \"kubernetes.io/projected/4a24385e-62ca-4a82-8995-9f20115931c4-kube-api-access-jp2s8\") pod \"network-metrics-daemon-n5jv7\" (UID: \"4a24385e-62ca-4a82-8995-9f20115931c4\") " pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.710109 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs\") pod \"network-metrics-daemon-n5jv7\" (UID: \"4a24385e-62ca-4a82-8995-9f20115931c4\") " pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.717788 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.729773 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.742152 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.754402 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.756093 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.756135 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.756149 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.756168 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.756181 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:37Z","lastTransitionTime":"2026-01-31T03:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.768636 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.784205 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b
5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.797338 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.810579 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp2s8\" (UniqueName: \"kubernetes.io/projected/4a24385e-62ca-4a82-8995-9f20115931c4-kube-api-access-jp2s8\") pod \"network-metrics-daemon-n5jv7\" (UID: \"4a24385e-62ca-4a82-8995-9f20115931c4\") " 
pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.810660 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs\") pod \"network-metrics-daemon-n5jv7\" (UID: \"4a24385e-62ca-4a82-8995-9f20115931c4\") " pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:37 crc kubenswrapper[4667]: E0131 03:48:37.810774 4667 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:48:37 crc kubenswrapper[4667]: E0131 03:48:37.810829 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs podName:4a24385e-62ca-4a82-8995-9f20115931c4 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:38.310812852 +0000 UTC m=+41.827148151 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs") pod "network-metrics-daemon-n5jv7" (UID: "4a24385e-62ca-4a82-8995-9f20115931c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.815487 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\
"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 
03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.830728 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.832555 4667 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jp2s8\" (UniqueName: \"kubernetes.io/projected/4a24385e-62ca-4a82-8995-9f20115931c4-kube-api-access-jp2s8\") pod \"network-metrics-daemon-n5jv7\" (UID: \"4a24385e-62ca-4a82-8995-9f20115931c4\") " pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.847438 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.863011 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.863279 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.863427 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.863548 4667 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.863651 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:37Z","lastTransitionTime":"2026-01-31T03:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.864319 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.878460 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5jv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a24385e-62ca-4a82-8995-9f20115931c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5jv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.892126 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:37Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.966216 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.966275 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.966286 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.966300 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:37 crc kubenswrapper[4667]: I0131 03:48:37.966310 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:37Z","lastTransitionTime":"2026-01-31T03:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.072797 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.073087 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.073210 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.073311 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.073407 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:38Z","lastTransitionTime":"2026-01-31T03:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.176042 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.176255 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.176351 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.176446 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.176541 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:38Z","lastTransitionTime":"2026-01-31T03:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.237041 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 04:09:36.367596094 +0000 UTC Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.279979 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.280020 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.280028 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.280044 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.280055 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:38Z","lastTransitionTime":"2026-01-31T03:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.281402 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.281429 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.281414 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:38 crc kubenswrapper[4667]: E0131 03:48:38.281521 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:38 crc kubenswrapper[4667]: E0131 03:48:38.281618 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:38 crc kubenswrapper[4667]: E0131 03:48:38.281687 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.316621 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs\") pod \"network-metrics-daemon-n5jv7\" (UID: \"4a24385e-62ca-4a82-8995-9f20115931c4\") " pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:38 crc kubenswrapper[4667]: E0131 03:48:38.316757 4667 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:48:38 crc kubenswrapper[4667]: E0131 03:48:38.316803 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs podName:4a24385e-62ca-4a82-8995-9f20115931c4 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:39.316788897 +0000 UTC m=+42.833124196 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs") pod "network-metrics-daemon-n5jv7" (UID: "4a24385e-62ca-4a82-8995-9f20115931c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.382832 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.382910 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.382918 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.382932 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.382942 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:38Z","lastTransitionTime":"2026-01-31T03:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.485326 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.485366 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.485376 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.485392 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.485402 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:38Z","lastTransitionTime":"2026-01-31T03:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.588472 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.588532 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.588544 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.588582 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.588595 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:38Z","lastTransitionTime":"2026-01-31T03:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.691260 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.691310 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.691323 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.691342 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.691356 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:38Z","lastTransitionTime":"2026-01-31T03:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.794728 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.794782 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.794795 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.794813 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.795078 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:38Z","lastTransitionTime":"2026-01-31T03:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.803167 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.804555 4667 scope.go:117] "RemoveContainer" containerID="4dcfd6f322ee75abb3f8338832201628ffa44f71fee53f35735ce5072f79ddd0" Jan 31 03:48:38 crc kubenswrapper[4667]: E0131 03:48:38.804882 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jhj5n_openshift-ovn-kubernetes(3d685ba5-5ff5-4e74-8d02-99a233fc6c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.817692 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.831674 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.855486 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.871395 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.885057 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.898773 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.898827 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.898859 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.898879 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.898892 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:38Z","lastTransitionTime":"2026-01-31T03:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.899572 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.919517 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.935200 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.950065 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.961979 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.971912 4667 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.987384 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:38 crc kubenswrapper[4667]: I0131 03:48:38.999099 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5jv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a24385e-62ca-4a82-8995-9f20115931c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5jv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:38Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:39 crc 
kubenswrapper[4667]: I0131 03:48:39.001145 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.001264 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.001286 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.001312 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.001341 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:39Z","lastTransitionTime":"2026-01-31T03:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.019529 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54
b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:39Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.042460 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:39Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.071494 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dcfd6f322ee75abb3f8338832201628ffa44f71
fee53f35735ce5072f79ddd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dcfd6f322ee75abb3f8338832201628ffa44f71fee53f35735ce5072f79ddd0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"message\\\":\\\"enshift-cluster-version/cluster-version-operator for network=default are: map[]\\\\nF0131 03:48:33.432694 6023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z]\\\\nI0131 03:48:33.432596 6023 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 03:48:33.432526 6023 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jhj5n_openshift-ovn-kubernetes(3d685ba5-5ff5-4e74-8d02-99a233fc6c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:39Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.090246 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3920ffb2-08f3-440b-bc6c-319a57bbe195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27cd88c349d4786018ab6ae21d45b22cdb95054c0b188bdce8cf97c53c09c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e3a943070029bd6e98682f2a4b3cfc0ab26dc2e9e7ab5179a60316923dbad33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4q9qz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:39Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.104487 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.104541 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.104554 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.104574 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.104586 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:39Z","lastTransitionTime":"2026-01-31T03:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.207821 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.207880 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.207893 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.207910 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.207920 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:39Z","lastTransitionTime":"2026-01-31T03:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.237659 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 01:12:04.128926211 +0000 UTC Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.281111 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:39 crc kubenswrapper[4667]: E0131 03:48:39.281510 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.310459 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.310518 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.310527 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.310539 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.310549 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:39Z","lastTransitionTime":"2026-01-31T03:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.327243 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs\") pod \"network-metrics-daemon-n5jv7\" (UID: \"4a24385e-62ca-4a82-8995-9f20115931c4\") " pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:39 crc kubenswrapper[4667]: E0131 03:48:39.327493 4667 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:48:39 crc kubenswrapper[4667]: E0131 03:48:39.327580 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs podName:4a24385e-62ca-4a82-8995-9f20115931c4 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:41.327550536 +0000 UTC m=+44.843885875 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs") pod "network-metrics-daemon-n5jv7" (UID: "4a24385e-62ca-4a82-8995-9f20115931c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.414004 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.414045 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.414053 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.414067 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.414077 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:39Z","lastTransitionTime":"2026-01-31T03:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.516717 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.516779 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.516790 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.516804 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.516814 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:39Z","lastTransitionTime":"2026-01-31T03:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.620814 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.620914 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.620926 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.620946 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.620959 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:39Z","lastTransitionTime":"2026-01-31T03:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.726721 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.726771 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.726786 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.726809 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.726825 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:39Z","lastTransitionTime":"2026-01-31T03:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.829528 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.829932 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.830046 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.830152 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.830289 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:39Z","lastTransitionTime":"2026-01-31T03:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.933228 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.933269 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.933281 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.933297 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:39 crc kubenswrapper[4667]: I0131 03:48:39.933307 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:39Z","lastTransitionTime":"2026-01-31T03:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.035832 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.035911 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.035924 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.035958 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.035970 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:40Z","lastTransitionTime":"2026-01-31T03:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.138380 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.138914 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.139069 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.139244 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.139372 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:40Z","lastTransitionTime":"2026-01-31T03:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.238486 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 23:28:44.457393717 +0000 UTC Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.243035 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.243092 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.243113 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.243139 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.243157 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:40Z","lastTransitionTime":"2026-01-31T03:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.281538 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.281566 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.281611 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:40 crc kubenswrapper[4667]: E0131 03:48:40.281696 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:40 crc kubenswrapper[4667]: E0131 03:48:40.282118 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:40 crc kubenswrapper[4667]: E0131 03:48:40.282170 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.346251 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.346282 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.346290 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.346306 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.346315 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:40Z","lastTransitionTime":"2026-01-31T03:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.448825 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.448892 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.448930 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.448950 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.448965 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:40Z","lastTransitionTime":"2026-01-31T03:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.551132 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.551172 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.551184 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.551230 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.551242 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:40Z","lastTransitionTime":"2026-01-31T03:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.653562 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.653664 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.653675 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.653687 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.653700 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:40Z","lastTransitionTime":"2026-01-31T03:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.757151 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.757194 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.757205 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.757221 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.757232 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:40Z","lastTransitionTime":"2026-01-31T03:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.860311 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.860369 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.860386 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.860410 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.860428 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:40Z","lastTransitionTime":"2026-01-31T03:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.963519 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.963552 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.963561 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.963576 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:40 crc kubenswrapper[4667]: I0131 03:48:40.963585 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:40Z","lastTransitionTime":"2026-01-31T03:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.066873 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.066913 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.066924 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.066941 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.066953 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:41Z","lastTransitionTime":"2026-01-31T03:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.169410 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.169459 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.169470 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.169490 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.169503 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:41Z","lastTransitionTime":"2026-01-31T03:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.239675 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 06:02:55.397226135 +0000 UTC Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.271749 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.271808 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.271825 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.271892 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.271914 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:41Z","lastTransitionTime":"2026-01-31T03:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.282151 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:41 crc kubenswrapper[4667]: E0131 03:48:41.282367 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.348491 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs\") pod \"network-metrics-daemon-n5jv7\" (UID: \"4a24385e-62ca-4a82-8995-9f20115931c4\") " pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:41 crc kubenswrapper[4667]: E0131 03:48:41.348726 4667 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:48:41 crc kubenswrapper[4667]: E0131 03:48:41.348832 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs podName:4a24385e-62ca-4a82-8995-9f20115931c4 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:45.348807846 +0000 UTC m=+48.865143145 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs") pod "network-metrics-daemon-n5jv7" (UID: "4a24385e-62ca-4a82-8995-9f20115931c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.374917 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.374971 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.374980 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.374993 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.375003 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:41Z","lastTransitionTime":"2026-01-31T03:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.477945 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.477995 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.478008 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.478025 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.478408 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:41Z","lastTransitionTime":"2026-01-31T03:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.581534 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.581578 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.581586 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.581602 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.581612 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:41Z","lastTransitionTime":"2026-01-31T03:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.685368 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.685412 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.685423 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.685437 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.685447 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:41Z","lastTransitionTime":"2026-01-31T03:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.787987 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.788032 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.788042 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.788060 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.788071 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:41Z","lastTransitionTime":"2026-01-31T03:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.890935 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.890978 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.890988 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.891003 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.891013 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:41Z","lastTransitionTime":"2026-01-31T03:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.993349 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.993407 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.993425 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.993450 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:41 crc kubenswrapper[4667]: I0131 03:48:41.993469 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:41Z","lastTransitionTime":"2026-01-31T03:48:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.095482 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.095628 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.095654 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.095685 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.095707 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:42Z","lastTransitionTime":"2026-01-31T03:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.198224 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.198282 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.198298 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.198322 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.198338 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:42Z","lastTransitionTime":"2026-01-31T03:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.240490 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 16:59:48.540755594 +0000 UTC Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.281617 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.281661 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.281790 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:42 crc kubenswrapper[4667]: E0131 03:48:42.281898 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:42 crc kubenswrapper[4667]: E0131 03:48:42.282168 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:42 crc kubenswrapper[4667]: E0131 03:48:42.282299 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.301335 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.301421 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.301438 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.301462 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.301479 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:42Z","lastTransitionTime":"2026-01-31T03:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.404416 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.404461 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.404472 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.404489 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.404500 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:42Z","lastTransitionTime":"2026-01-31T03:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.507454 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.507523 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.507547 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.507582 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.507610 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:42Z","lastTransitionTime":"2026-01-31T03:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.609290 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.609341 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.609356 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.609379 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.609396 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:42Z","lastTransitionTime":"2026-01-31T03:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:42 crc kubenswrapper[4667]: E0131 03:48:42.623762 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:42Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.627352 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.627384 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.627394 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.627411 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.627423 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:42Z","lastTransitionTime":"2026-01-31T03:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:42 crc kubenswrapper[4667]: E0131 03:48:42.643221 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:42Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.650110 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.650188 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.650208 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.650233 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.650252 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:42Z","lastTransitionTime":"2026-01-31T03:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:42 crc kubenswrapper[4667]: E0131 03:48:42.667672 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:42Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.672006 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.672064 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.672088 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.672121 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.672142 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:42Z","lastTransitionTime":"2026-01-31T03:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:42 crc kubenswrapper[4667]: E0131 03:48:42.692676 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:42Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.697688 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.697764 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.697782 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.697833 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.697877 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:42Z","lastTransitionTime":"2026-01-31T03:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:42 crc kubenswrapper[4667]: E0131 03:48:42.714622 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ [image list omitted - byte-for-byte identical to the previous status patch retry logged above] ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:42Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:42 crc kubenswrapper[4667]: E0131 03:48:42.714881 4667 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
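
All of the patch retries above fail for the same reason: the serving certificate of the node.network-node-identity webhook at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-31. A minimal Go sketch for confirming this from the node follows; it is an illustration, not OpenShift tooling, and assumes only that the webhook port is reachable locally (the address is taken from the failed Post in the log).

```go
// certexpiry.go - dial the webhook endpoint named in the error above and
// print the leaf certificate's validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // webhook address from the failed Post above
	conn, err := tls.Dial("tcp", addr, &tls.Config{
		InsecureSkipVerify: true, // skip verification so an expired cert can still be inspected
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		os.Exit(1)
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", leaf.Subject)
	fmt.Println("notBefore:", leaf.NotBefore.Format(time.RFC3339))
	fmt.Println("notAfter: ", leaf.NotAfter.Format(time.RFC3339))
	if time.Now().After(leaf.NotAfter) {
		fmt.Println("certificate has expired, matching the x509 error above")
	}
}
```
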
event="NodeHasSufficientMemory" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.716952 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.716963 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.716980 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.716990 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:42Z","lastTransitionTime":"2026-01-31T03:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.819154 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.819242 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.819268 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.819284 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.819298 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:42Z","lastTransitionTime":"2026-01-31T03:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.922813 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.922908 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.922920 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.922956 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:42 crc kubenswrapper[4667]: I0131 03:48:42.922972 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:42Z","lastTransitionTime":"2026-01-31T03:48:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.025448 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.025518 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.025550 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.025570 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.025584 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:43Z","lastTransitionTime":"2026-01-31T03:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.128113 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.128144 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.128151 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.128166 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.128174 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:43Z","lastTransitionTime":"2026-01-31T03:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.230771 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.230820 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.230872 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.230948 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.230967 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:43Z","lastTransitionTime":"2026-01-31T03:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.241092 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:35:10.424423096 +0000 UTC Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.281141 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:43 crc kubenswrapper[4667]: E0131 03:48:43.281282 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.333237 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.333279 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.333289 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.333305 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.333315 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:43Z","lastTransitionTime":"2026-01-31T03:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.436332 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.436389 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.436406 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.436430 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.436449 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:43Z","lastTransitionTime":"2026-01-31T03:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
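
The certificate_manager.go line above shows the kubelet-serving certificate still valid until 2026-02-24 while its rotation deadline already lies in the past; the deadline is recomputed with fresh jitter on each pass, which is why the logged value keeps changing. A sketch of how such a jittered deadline can be derived follows, under the assumption (matching client-go's certificate manager, but not copied from it) that the deadline is a random point in the 70%-90% band of the certificate lifetime; the notBefore value is hypothetical.

```go
// rotationjitter.go - derive a jittered rotation deadline from a
// certificate's validity window.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point at 70%-90% of the lifetime,
// the assumed jitter rule described in the lead-in above.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiration copied from the certificate_manager.go line above.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.AddDate(0, 0, -365) // hypothetical issue date
	for i := 0; i < 3; i++ {
		fmt.Println("candidate rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```
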
Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.538659 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.538736 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.538751 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.538773 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.538786 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:43Z","lastTransitionTime":"2026-01-31T03:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.641147 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.641184 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.641192 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.641205 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.641217 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:43Z","lastTransitionTime":"2026-01-31T03:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.743929 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.743999 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.744016 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.744043 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.744061 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:43Z","lastTransitionTime":"2026-01-31T03:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.849174 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.849616 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.849689 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.849784 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.849938 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:43Z","lastTransitionTime":"2026-01-31T03:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.953165 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.953238 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.953256 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.953286 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:43 crc kubenswrapper[4667]: I0131 03:48:43.953307 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:43Z","lastTransitionTime":"2026-01-31T03:48:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.057242 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.057313 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.057329 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.057357 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.057374 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:44Z","lastTransitionTime":"2026-01-31T03:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.161007 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.161057 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.161066 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.161083 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.161094 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:44Z","lastTransitionTime":"2026-01-31T03:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.241902 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 10:11:56.594993472 +0000 UTC Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.264122 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.264297 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.264315 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.264339 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.264359 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:44Z","lastTransitionTime":"2026-01-31T03:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.281203 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:44 crc kubenswrapper[4667]: E0131 03:48:44.281371 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.281420 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.281546 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:44 crc kubenswrapper[4667]: E0131 03:48:44.281706 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:44 crc kubenswrapper[4667]: E0131 03:48:44.281803 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.367476 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.367534 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.367580 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.367613 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.367637 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:44Z","lastTransitionTime":"2026-01-31T03:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.471193 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.471263 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.471282 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.471309 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.471343 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:44Z","lastTransitionTime":"2026-01-31T03:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.574393 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.574440 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.574457 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.574481 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.574497 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:44Z","lastTransitionTime":"2026-01-31T03:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.677887 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.677955 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.677980 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.678008 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.678027 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:44Z","lastTransitionTime":"2026-01-31T03:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.781283 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.781331 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.781343 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.781363 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.781397 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:44Z","lastTransitionTime":"2026-01-31T03:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.884726 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.884835 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.884908 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.884958 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.884983 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:44Z","lastTransitionTime":"2026-01-31T03:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.988023 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.988087 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.988109 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.988134 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:44 crc kubenswrapper[4667]: I0131 03:48:44.988151 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:44Z","lastTransitionTime":"2026-01-31T03:48:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.091451 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.091513 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.091534 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.091564 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.091589 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:45Z","lastTransitionTime":"2026-01-31T03:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.194448 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.194540 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.194557 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.194613 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.194632 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:45Z","lastTransitionTime":"2026-01-31T03:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.242004 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:08:35.286198037 +0000 UTC Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.281786 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:45 crc kubenswrapper[4667]: E0131 03:48:45.281984 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.297940 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.297975 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.297988 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.298005 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.298019 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:45Z","lastTransitionTime":"2026-01-31T03:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.393660 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs\") pod \"network-metrics-daemon-n5jv7\" (UID: \"4a24385e-62ca-4a82-8995-9f20115931c4\") " pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:45 crc kubenswrapper[4667]: E0131 03:48:45.393883 4667 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:48:45 crc kubenswrapper[4667]: E0131 03:48:45.393968 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs podName:4a24385e-62ca-4a82-8995-9f20115931c4 nodeName:}" failed. No retries permitted until 2026-01-31 03:48:53.393943547 +0000 UTC m=+56.910278866 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs") pod "network-metrics-daemon-n5jv7" (UID: "4a24385e-62ca-4a82-8995-9f20115931c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.400597 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.400653 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.400673 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.400700 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.400743 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:45Z","lastTransitionTime":"2026-01-31T03:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.504211 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.504282 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.504301 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.504330 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.504349 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:45Z","lastTransitionTime":"2026-01-31T03:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.524433 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.536782 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.545513 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.566158 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.582247 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.602125 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.608729 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.609164 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.609242 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.609678 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.609760 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:45Z","lastTransitionTime":"2026-01-31T03:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.623508 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.647999 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b
5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.662386 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.682822 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.701545 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.713146 4667 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.713209 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.713221 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.713239 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.713275 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:45Z","lastTransitionTime":"2026-01-31T03:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.717117 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.734619 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.749339 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5jv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a24385e-62ca-4a82-8995-9f20115931c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:37Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-n5jv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.770269 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.788004 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.810106 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dcfd6f322ee75abb3f8338832201628ffa44f71
fee53f35735ce5072f79ddd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dcfd6f322ee75abb3f8338832201628ffa44f71fee53f35735ce5072f79ddd0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"message\\\":\\\"enshift-cluster-version/cluster-version-operator for network=default are: map[]\\\\nF0131 03:48:33.432694 6023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z]\\\\nI0131 03:48:33.432596 6023 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 03:48:33.432526 6023 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jhj5n_openshift-ovn-kubernetes(3d685ba5-5ff5-4e74-8d02-99a233fc6c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.816188 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.816244 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.816265 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.816289 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.816307 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:45Z","lastTransitionTime":"2026-01-31T03:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.827948 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3920ffb2-08f3-440b-bc6c-319a57bbe195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27cd88c349d4786018ab6ae21d45b22cdb95054c0b188bdce8cf97c53c09c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e3a943070029bd6e98682f2a4b3cfc0ab26dc2e9e7ab5179a60316923dbad33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4q9qz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.843323 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:45Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.919743 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.919807 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.919819 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.919866 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:45 crc kubenswrapper[4667]: I0131 03:48:45.919881 4667 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:45Z","lastTransitionTime":"2026-01-31T03:48:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.022299 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.022362 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.022380 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.022405 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.022424 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:46Z","lastTransitionTime":"2026-01-31T03:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.125020 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.125062 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.125071 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.125085 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.125095 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:46Z","lastTransitionTime":"2026-01-31T03:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.227505 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.227546 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.227555 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.227569 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.227578 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:46Z","lastTransitionTime":"2026-01-31T03:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.242798 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 01:47:25.990456829 +0000 UTC Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.281306 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.281381 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.281307 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:46 crc kubenswrapper[4667]: E0131 03:48:46.281571 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:46 crc kubenswrapper[4667]: E0131 03:48:46.281698 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:46 crc kubenswrapper[4667]: E0131 03:48:46.281756 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.330063 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.330116 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.330127 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.330143 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.330153 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:46Z","lastTransitionTime":"2026-01-31T03:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.433368 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.433432 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.433448 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.433473 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.433538 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:46Z","lastTransitionTime":"2026-01-31T03:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.536364 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.536423 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.536461 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.536495 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.536516 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:46Z","lastTransitionTime":"2026-01-31T03:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.639543 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.639629 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.639660 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.639692 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.639713 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:46Z","lastTransitionTime":"2026-01-31T03:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.742793 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.742874 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.742886 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.742904 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.742919 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:46Z","lastTransitionTime":"2026-01-31T03:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.847943 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.848005 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.848017 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.848036 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.848048 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:46Z","lastTransitionTime":"2026-01-31T03:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.950943 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.951014 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.951033 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.951061 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:46 crc kubenswrapper[4667]: I0131 03:48:46.951081 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:46Z","lastTransitionTime":"2026-01-31T03:48:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.054125 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.054215 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.054234 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.054272 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.054295 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:47Z","lastTransitionTime":"2026-01-31T03:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.161340 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.161401 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.161419 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.161443 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.161461 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:47Z","lastTransitionTime":"2026-01-31T03:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.243203 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:49:47.42726882 +0000 UTC Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.264164 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.264203 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.264215 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.264232 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.264243 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:47Z","lastTransitionTime":"2026-01-31T03:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.281288 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:47 crc kubenswrapper[4667]: E0131 03:48:47.281628 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.296878 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:47Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.312306 4667 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:47Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.324047 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:47Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.337882 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:47Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.350375 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:47Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.363202 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:47Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.367036 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.367072 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.367085 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.367103 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.367115 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:47Z","lastTransitionTime":"2026-01-31T03:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.376917 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:47Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.394163 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5jv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a24385e-62ca-4a82-8995-9f20115931c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5jv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:47Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.408821 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:47Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.420794 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:47Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.431170 4667 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:47Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.444221 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:47Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.457116 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3920ffb2-08f3-440b-bc6c-319a57bbe195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27cd88c349d4786018ab6ae21d45b22cdb95054c0b188bdce8cf97c53c09c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e3a943070029bd6e98682f2a4b3cfc0ab26dc2e9e7ab5179a60316923dbad33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4q9qz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:47Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.469937 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.469988 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.470008 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.470032 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.470047 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:47Z","lastTransitionTime":"2026-01-31T03:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.480350 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd
48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:47Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.491061 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:47Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.502555 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa0d7266-def3-467f-8ea8-8bb9d7364385\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f2362cecfaa0886df1bf67ce2fe0bc1f9586a785228c776daa0062302ae5f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a145cfd5492e6e2c3168e54747f3699b5148950bf88dc0431699e0dc6ff4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec15f3fe2b9b1c6827bc9093c19c1fe8cba5dc2aa0db3289e0a0b7029b8b09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:47Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.522302 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dcfd6f322ee75abb3f8338832201628ffa44f71
fee53f35735ce5072f79ddd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dcfd6f322ee75abb3f8338832201628ffa44f71fee53f35735ce5072f79ddd0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"message\\\":\\\"enshift-cluster-version/cluster-version-operator for network=default are: map[]\\\\nF0131 03:48:33.432694 6023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z]\\\\nI0131 03:48:33.432596 6023 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 03:48:33.432526 6023 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jhj5n_openshift-ovn-kubernetes(3d685ba5-5ff5-4e74-8d02-99a233fc6c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:47Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.536577 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:47Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.572515 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.572567 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.572584 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.572607 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.572624 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:47Z","lastTransitionTime":"2026-01-31T03:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.676142 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.676210 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.676222 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.676261 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.676273 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:47Z","lastTransitionTime":"2026-01-31T03:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.779560 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.779604 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.779618 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.779636 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.779651 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:47Z","lastTransitionTime":"2026-01-31T03:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.882990 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.883039 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.883050 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.883068 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.883083 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:47Z","lastTransitionTime":"2026-01-31T03:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.921650 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:48:47 crc kubenswrapper[4667]: E0131 03:48:47.921753 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:49:19.921732337 +0000 UTC m=+83.438067646 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.985875 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.985933 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.985952 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.985979 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:47 crc kubenswrapper[4667]: I0131 03:48:47.985996 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:47Z","lastTransitionTime":"2026-01-31T03:48:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.023283 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.023366 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.023408 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.023488 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:48 crc kubenswrapper[4667]: E0131 03:48:48.023581 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:48:48 crc kubenswrapper[4667]: E0131 03:48:48.023608 4667 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:48:48 crc kubenswrapper[4667]: E0131 03:48:48.023627 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:48:48 crc kubenswrapper[4667]: E0131 03:48:48.023650 4667 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:48 crc kubenswrapper[4667]: E0131 03:48:48.023587 4667 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:48:48 crc kubenswrapper[4667]: E0131 03:48:48.023694 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:49:20.023669726 +0000 UTC m=+83.540005065 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:48:48 crc kubenswrapper[4667]: E0131 03:48:48.023748 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:48:48 crc kubenswrapper[4667]: E0131 03:48:48.023762 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 03:49:20.023741898 +0000 UTC m=+83.540077237 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:48 crc kubenswrapper[4667]: E0131 03:48:48.023780 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:48:48 crc kubenswrapper[4667]: E0131 03:48:48.023795 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:49:20.023781639 +0000 UTC m=+83.540116978 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:48:48 crc kubenswrapper[4667]: E0131 03:48:48.023805 4667 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:48 crc kubenswrapper[4667]: E0131 03:48:48.023947 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 03:49:20.023919853 +0000 UTC m=+83.540255182 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.089697 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.089788 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.089820 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.089892 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.089921 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:48Z","lastTransitionTime":"2026-01-31T03:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.193666 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.193728 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.193746 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.193771 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.193790 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:48Z","lastTransitionTime":"2026-01-31T03:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.243635 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:29:45.140946494 +0000 UTC Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.281079 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:48 crc kubenswrapper[4667]: E0131 03:48:48.281219 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.281295 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.281296 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:48 crc kubenswrapper[4667]: E0131 03:48:48.281539 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:48 crc kubenswrapper[4667]: E0131 03:48:48.281693 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.298200 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.298241 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.298255 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.298273 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.298291 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:48Z","lastTransitionTime":"2026-01-31T03:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.402248 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.402324 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.402341 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.402369 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.402395 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:48Z","lastTransitionTime":"2026-01-31T03:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.505950 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.506004 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.506023 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.506145 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.506173 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:48Z","lastTransitionTime":"2026-01-31T03:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.609197 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.609251 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.609264 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.609285 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.609299 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:48Z","lastTransitionTime":"2026-01-31T03:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.712185 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.712240 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.712258 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.712284 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.712303 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:48Z","lastTransitionTime":"2026-01-31T03:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.815061 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.815110 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.815119 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.815136 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.815146 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:48Z","lastTransitionTime":"2026-01-31T03:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.918141 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.918202 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.918214 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.918235 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:48 crc kubenswrapper[4667]: I0131 03:48:48.918249 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:48Z","lastTransitionTime":"2026-01-31T03:48:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.020888 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.020942 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.020955 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.020977 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.020989 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:49Z","lastTransitionTime":"2026-01-31T03:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.123598 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.123681 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.123720 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.123740 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.123758 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:49Z","lastTransitionTime":"2026-01-31T03:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.226931 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.226987 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.227007 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.227031 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.227058 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:49Z","lastTransitionTime":"2026-01-31T03:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.243817 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 19:27:33.047207379 +0000 UTC Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.281537 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:49 crc kubenswrapper[4667]: E0131 03:48:49.281941 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.330188 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.330329 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.330351 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.330375 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.330392 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:49Z","lastTransitionTime":"2026-01-31T03:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.433761 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.433827 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.433885 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.433914 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.433935 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:49Z","lastTransitionTime":"2026-01-31T03:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.537106 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.537151 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.537163 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.537182 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.537194 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:49Z","lastTransitionTime":"2026-01-31T03:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.640122 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.640196 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.640215 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.640248 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.640273 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:49Z","lastTransitionTime":"2026-01-31T03:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.743594 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.743653 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.743677 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.743707 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.743726 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:49Z","lastTransitionTime":"2026-01-31T03:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.848090 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.848161 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.848183 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.848217 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.848238 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:49Z","lastTransitionTime":"2026-01-31T03:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.951279 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.951401 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.951427 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.951470 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:49 crc kubenswrapper[4667]: I0131 03:48:49.951495 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:49Z","lastTransitionTime":"2026-01-31T03:48:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.055140 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.055289 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.055310 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.055335 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.055403 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:50Z","lastTransitionTime":"2026-01-31T03:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.158411 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.158528 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.158549 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.158618 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.158640 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:50Z","lastTransitionTime":"2026-01-31T03:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.244818 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:40:30.896031677 +0000 UTC Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.262287 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.262322 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.262334 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.262377 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.262389 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:50Z","lastTransitionTime":"2026-01-31T03:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.280745 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.280788 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:50 crc kubenswrapper[4667]: E0131 03:48:50.280865 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:50 crc kubenswrapper[4667]: E0131 03:48:50.280958 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.281010 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:50 crc kubenswrapper[4667]: E0131 03:48:50.281258 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.364732 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.364797 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.364813 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.364859 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.364877 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:50Z","lastTransitionTime":"2026-01-31T03:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.467413 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.467511 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.467538 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.467576 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.467606 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:50Z","lastTransitionTime":"2026-01-31T03:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.571620 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.571688 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.571705 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.571733 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.571751 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:50Z","lastTransitionTime":"2026-01-31T03:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.675315 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.675391 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.675411 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.675438 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.675457 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:50Z","lastTransitionTime":"2026-01-31T03:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.778912 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.778994 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.779072 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.779103 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.779124 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:50Z","lastTransitionTime":"2026-01-31T03:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.882482 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.882552 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.882574 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.882600 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.882620 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:50Z","lastTransitionTime":"2026-01-31T03:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.985438 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.985522 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.985547 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.985580 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:50 crc kubenswrapper[4667]: I0131 03:48:50.985605 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:50Z","lastTransitionTime":"2026-01-31T03:48:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.088027 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.088110 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.088142 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.088172 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.088193 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:51Z","lastTransitionTime":"2026-01-31T03:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.191435 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.191503 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.191524 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.191552 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.191572 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:51Z","lastTransitionTime":"2026-01-31T03:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.245696 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 16:00:21.300961444 +0000 UTC Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.281569 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:51 crc kubenswrapper[4667]: E0131 03:48:51.282362 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.282948 4667 scope.go:117] "RemoveContainer" containerID="4dcfd6f322ee75abb3f8338832201628ffa44f71fee53f35735ce5072f79ddd0" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.295249 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.295906 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.296433 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.296927 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.298056 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:51Z","lastTransitionTime":"2026-01-31T03:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.403597 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.404236 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.404256 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.404283 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.404300 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:51Z","lastTransitionTime":"2026-01-31T03:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.507781 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.507902 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.507947 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.507973 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.507989 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:51Z","lastTransitionTime":"2026-01-31T03:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.611700 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.611776 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.611790 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.611812 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.611866 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:51Z","lastTransitionTime":"2026-01-31T03:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.632204 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovnkube-controller/1.log" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.636804 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerStarted","Data":"bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3"} Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.639347 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.652050 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":
\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:51Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.665648 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5jv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a24385e-62ca-4a82-8995-9f20115931c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5jv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:51Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.683282 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:51Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.695500 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:51Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.712045 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:51Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.714812 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.714893 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.714911 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.714936 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.714954 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:51Z","lastTransitionTime":"2026-01-31T03:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.746230 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8eb04c461cb43803302aaa3b6a93643b780598
fe63f798a8834d1c762040d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dcfd6f322ee75abb3f8338832201628ffa44f71fee53f35735ce5072f79ddd0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"message\\\":\\\"enshift-cluster-version/cluster-version-operator for network=default are: map[]\\\\nF0131 03:48:33.432694 6023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z]\\\\nI0131 03:48:33.432596 6023 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 03:48:33.432526 6023 
model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\"
:[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:51Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.766556 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3920ffb2-08f3-440b-bc6c-319a57bbe195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27cd88c349d4786018ab6ae21d45b22cdb95054c0b188bdce8cf97c53c09c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e3a943070029bd6e98682f2a4b3cfc0ab26dc2e9e7ab5179a60316923dbad33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4q9qz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:51Z is after 2025-08-24T17:21:41Z" Jan 31 
03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.793434 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:51Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.809958 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:51Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.817566 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.817603 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.817639 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.817661 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.817677 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:51Z","lastTransitionTime":"2026-01-31T03:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.829769 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa0d7266-def3-467f-8ea8-8bb9d7364385\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f2362cecfaa0886df1bf67ce2fe0bc1f9586a785228c776daa0062302ae5f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a145cfd5492e6e2c3168e54747f3699b5148950bf88dc0431699e0dc6ff4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec15f3fe2b9b1c6827bc9093c19c1fe8cba5dc2aa0db3289e0a0b7029b8b09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:51Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.857806 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:51Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.880196 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:51Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.897640 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:51Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.909983 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:51Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.923294 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.923372 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:51 crc 
kubenswrapper[4667]: I0131 03:48:51.923387 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.923417 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.923444 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:51Z","lastTransitionTime":"2026-01-31T03:48:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.930203 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:51Z is after 2025-08-24T17:21:41Z" Jan 
31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.944693 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:51Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.956272 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:51Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:51 crc kubenswrapper[4667]: I0131 03:48:51.968408 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:51Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.026815 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.026883 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.026894 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.026914 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.026926 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:52Z","lastTransitionTime":"2026-01-31T03:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.129529 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.129568 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.129579 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.129593 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.129604 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:52Z","lastTransitionTime":"2026-01-31T03:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.233040 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.233087 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.233104 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.233123 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.233142 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:52Z","lastTransitionTime":"2026-01-31T03:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.246196 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 20:06:04.961497292 +0000 UTC Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.281110 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.281146 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.281170 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:52 crc kubenswrapper[4667]: E0131 03:48:52.281234 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:52 crc kubenswrapper[4667]: E0131 03:48:52.281364 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:52 crc kubenswrapper[4667]: E0131 03:48:52.281445 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.335963 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.335994 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.336003 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.336025 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.336034 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:52Z","lastTransitionTime":"2026-01-31T03:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.439154 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.439230 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.439253 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.439285 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.439307 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:52Z","lastTransitionTime":"2026-01-31T03:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.542466 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.542509 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.542518 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.542544 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.542554 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:52Z","lastTransitionTime":"2026-01-31T03:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.683329 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.683412 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.683440 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.683474 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.683499 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:52Z","lastTransitionTime":"2026-01-31T03:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.685351 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovnkube-controller/2.log" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.686585 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovnkube-controller/1.log" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.691013 4667 generic.go:334] "Generic (PLEG): container finished" podID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerID="bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3" exitCode=1 Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.691072 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerDied","Data":"bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3"} Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.691131 4667 scope.go:117] "RemoveContainer" containerID="4dcfd6f322ee75abb3f8338832201628ffa44f71fee53f35735ce5072f79ddd0" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.692324 4667 scope.go:117] "RemoveContainer" containerID="bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3" Jan 31 03:48:52 crc kubenswrapper[4667]: E0131 03:48:52.692690 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jhj5n_openshift-ovn-kubernetes(3d685ba5-5ff5-4e74-8d02-99a233fc6c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.718639 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:52Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.739770 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:52Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.762209 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:52Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.787486 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.787553 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.787572 4667 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.787598 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.787616 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:52Z","lastTransitionTime":"2026-01-31T03:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.792967 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:52Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.811775 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:52Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.836664 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:52Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.864643 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:52Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.884662 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:52Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.893410 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.893470 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.893489 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.893515 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.893535 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:52Z","lastTransitionTime":"2026-01-31T03:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.912218 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:52Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.932493 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5jv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a24385e-62ca-4a82-8995-9f20115931c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5jv7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:52Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.955822 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:52Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.978360 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:52Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.992135 4667 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.992199 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.992217 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.992242 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:52 crc kubenswrapper[4667]: I0131 03:48:52.992260 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:52Z","lastTransitionTime":"2026-01-31T03:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.000531 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa0d7266-def3-467f-8ea8-8bb9d7364385\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f2362cecfaa0886df1bf67ce2fe0bc1f9586a785228c776daa0062302ae5f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a145cfd5492e6e2c3168e54747f3699b5148950bf88dc0431699e0dc6ff4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec15f3fe2b9b1c6827bc9093c19c1fe8cba5dc2aa0db3289e0a0b7029b8b09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:52Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: E0131 03:48:53.013704 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.018240 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.018314 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.018334 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.018363 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.018384 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:53Z","lastTransitionTime":"2026-01-31T03:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:53 crc kubenswrapper[4667]: E0131 03:48:53.036536 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.040792 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dcfd6f322ee75abb3f8338832201628ffa44f71fee53f35735ce5072f79ddd0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"message\\\":\\\"enshift-cluster-version/cluster-version-operator for network=default are: map[]\\\\nF0131 03:48:33.432694 6023 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:33Z is after 2025-08-24T17:21:41Z]\\\\nI0131 03:48:33.432596 6023 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 03:48:33.432526 6023 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:52Z\\\",\\\"message\\\":\\\"ift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true 
service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0073e06ab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-controller,},ClusterIP:10.217.5.16,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.16],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0131 03:48:52.267064 6240 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.044055 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.044152 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.044179 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.044214 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.044243 4667 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:53Z","lastTransitionTime":"2026-01-31T03:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.073399 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3920ffb2-08f3-440b-bc6c-319a57bbe195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27cd88c349d4786018ab6ae21d45b22cdb95054c0b188bdce8cf97c53c09c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e3a943070029bd6e98682f2a4b3cfc0ab26dc2e9e7ab5179a60316923dbad33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4q9qz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: E0131 03:48:53.074977 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.091082 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.091173 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.091192 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.091226 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.091240 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:53Z","lastTransitionTime":"2026-01-31T03:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:53 crc kubenswrapper[4667]: E0131 03:48:53.110715 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.115868 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.115949 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.115975 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.116010 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.116033 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:53Z","lastTransitionTime":"2026-01-31T03:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.121640 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: E0131 03:48:53.136378 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: E0131 03:48:53.136970 4667 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.139202 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.139240 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.139267 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.139286 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.139299 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:53Z","lastTransitionTime":"2026-01-31T03:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.140857 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.159333 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.242950 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.243332 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.243463 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.243627 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.243756 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:53Z","lastTransitionTime":"2026-01-31T03:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.247233 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 17:24:49.214985425 +0000 UTC Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.281831 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:53 crc kubenswrapper[4667]: E0131 03:48:53.282112 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.347918 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.348240 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.348334 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.348444 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.348503 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:53Z","lastTransitionTime":"2026-01-31T03:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.452114 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.452182 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.452201 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.452231 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.452254 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:53Z","lastTransitionTime":"2026-01-31T03:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.489200 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs\") pod \"network-metrics-daemon-n5jv7\" (UID: \"4a24385e-62ca-4a82-8995-9f20115931c4\") " pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:53 crc kubenswrapper[4667]: E0131 03:48:53.489472 4667 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:48:53 crc kubenswrapper[4667]: E0131 03:48:53.489599 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs podName:4a24385e-62ca-4a82-8995-9f20115931c4 nodeName:}" failed. No retries permitted until 2026-01-31 03:49:09.489572333 +0000 UTC m=+73.005907672 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs") pod "network-metrics-daemon-n5jv7" (UID: "4a24385e-62ca-4a82-8995-9f20115931c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.556058 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.556111 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.556129 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.556156 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.556178 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:53Z","lastTransitionTime":"2026-01-31T03:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.660958 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.661045 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.661062 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.661090 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.661113 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:53Z","lastTransitionTime":"2026-01-31T03:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.697742 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovnkube-controller/2.log" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.703211 4667 scope.go:117] "RemoveContainer" containerID="bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3" Jan 31 03:48:53 crc kubenswrapper[4667]: E0131 03:48:53.703496 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jhj5n_openshift-ovn-kubernetes(3d685ba5-5ff5-4e74-8d02-99a233fc6c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.724686 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.747309 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.765353 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.765425 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.765453 4667 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.765482 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.765504 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:53Z","lastTransitionTime":"2026-01-31T03:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.770625 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.785721 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.808920 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.824485 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.840353 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.862620 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.869125 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.869194 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.869217 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.869252 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.869276 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:53Z","lastTransitionTime":"2026-01-31T03:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.877081 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5jv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a24385e-62ca-4a82-8995-9f20115931c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5jv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.894899 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.916463 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.931748 4667 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.955479 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:52Z\\\",\\\"message\\\":\\\"ift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0073e06ab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-controller,},ClusterIP:10.217.5.16,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.16],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0131 03:48:52.267064 6240 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jhj5n_openshift-ovn-kubernetes(3d685ba5-5ff5-4e74-8d02-99a233fc6c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.969528 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3920ffb2-08f3-440b-bc6c-319a57bbe195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27cd88c349d4786018ab6ae21d45b22cdb95054c0b188bdce8cf97c53c09c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e3a943070029bd6e98682f2a4b3cfc0ab26dc2e9e7ab5179a60316923dbad33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4q9qz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.971599 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.971676 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.971691 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.971714 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.971727 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:53Z","lastTransitionTime":"2026-01-31T03:48:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:53 crc kubenswrapper[4667]: I0131 03:48:53.993055 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:53Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.014400 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:54Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.034418 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa0d7266-def3-467f-8ea8-8bb9d7364385\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f2362cecfaa0886df1bf67ce2fe0bc1f9586a785228c776daa0062302ae5f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a145cfd5492e6e2c3168e54747f3699b5148950bf88dc0431699e0dc6ff4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec15f3fe2b9b1c6827bc9093c19c1fe8cba5dc2aa0db3289e0a0b7029b8b09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:54Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.052107 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:54Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.075365 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.075419 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.075432 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.075452 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.075465 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:54Z","lastTransitionTime":"2026-01-31T03:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.178765 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.178871 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.178901 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.178925 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.178940 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:54Z","lastTransitionTime":"2026-01-31T03:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.248050 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:58:03.424973773 +0000 UTC Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.282641 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:54 crc kubenswrapper[4667]: E0131 03:48:54.282777 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.283041 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:54 crc kubenswrapper[4667]: E0131 03:48:54.283271 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.283101 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:54 crc kubenswrapper[4667]: E0131 03:48:54.283517 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.287273 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.287296 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.287305 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.287318 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.287328 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:54Z","lastTransitionTime":"2026-01-31T03:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.390349 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.390411 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.390430 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.390456 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.390473 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:54Z","lastTransitionTime":"2026-01-31T03:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.493622 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.493709 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.493722 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.493742 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.493758 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:54Z","lastTransitionTime":"2026-01-31T03:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.596603 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.596660 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.596672 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.596693 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.596707 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:54Z","lastTransitionTime":"2026-01-31T03:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.699523 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.699829 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.699864 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.699882 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.699896 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:54Z","lastTransitionTime":"2026-01-31T03:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.803288 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.803354 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.803373 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.803398 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.803410 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:54Z","lastTransitionTime":"2026-01-31T03:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.907135 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.907237 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.907257 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.907286 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:54 crc kubenswrapper[4667]: I0131 03:48:54.907307 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:54Z","lastTransitionTime":"2026-01-31T03:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.010600 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.010682 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.010701 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.010735 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.010756 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:55Z","lastTransitionTime":"2026-01-31T03:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.114417 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.114465 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.114478 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.114496 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.114510 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:55Z","lastTransitionTime":"2026-01-31T03:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.217977 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.218026 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.218038 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.218059 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.218073 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:55Z","lastTransitionTime":"2026-01-31T03:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.248900 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 13:19:39.1052975 +0000 UTC Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.281486 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:55 crc kubenswrapper[4667]: E0131 03:48:55.282156 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
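The certificate_manager.go records above report the kubelet-serving certificate's expiration (2026-02-24) and a rotation deadline the manager picked ahead of it. A minimal sketch for inspecting such a certificate's validity window directly; the path used here is the conventional kubelet serving-certificate location on OpenShift/CRC nodes, which is an assumption for illustration and is not stated anywhere in this log:

```go
// certcheck.go - print the validity window of a kubelet serving certificate.
// The path below is an assumed conventional location, not taken from the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-server-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data) // first PEM block is the leaf certificate
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("NotBefore: %s\nNotAfter:  %s\nexpired:   %v\n",
		cert.NotBefore, cert.NotAfter, time.Now().After(cert.NotAfter))
}
```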
pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.320999 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.321049 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.321058 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.321076 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.321089 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:55Z","lastTransitionTime":"2026-01-31T03:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.424272 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.424350 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.424374 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.424409 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.424432 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:55Z","lastTransitionTime":"2026-01-31T03:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.527546 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.527590 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.527603 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.527619 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.527628 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:55Z","lastTransitionTime":"2026-01-31T03:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.630004 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.630488 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.630671 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.630882 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.631043 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:55Z","lastTransitionTime":"2026-01-31T03:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.734195 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.734234 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.734244 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.734261 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.734272 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:55Z","lastTransitionTime":"2026-01-31T03:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.837703 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.837761 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.837777 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.837804 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.837823 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:55Z","lastTransitionTime":"2026-01-31T03:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.940707 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.940747 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.940760 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.940778 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:55 crc kubenswrapper[4667]: I0131 03:48:55.940791 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:55Z","lastTransitionTime":"2026-01-31T03:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:56 crc kubenswrapper[4667]: I0131 03:48:56.043983 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:56 crc kubenswrapper[4667]: I0131 03:48:56.044043 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:56 crc kubenswrapper[4667]: I0131 03:48:56.044057 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:56 crc kubenswrapper[4667]: I0131 03:48:56.044087 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:56 crc kubenswrapper[4667]: I0131 03:48:56.044102 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:56Z","lastTransitionTime":"2026-01-31T03:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:56 crc kubenswrapper[4667]: I0131 03:48:56.147057 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:56 crc kubenswrapper[4667]: I0131 03:48:56.147428 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:56 crc kubenswrapper[4667]: I0131 03:48:56.147615 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:56 crc kubenswrapper[4667]: I0131 03:48:56.147775 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:56 crc kubenswrapper[4667]: I0131 03:48:56.147962 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:56Z","lastTransitionTime":"2026-01-31T03:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:56 crc kubenswrapper[4667]: I0131 03:48:56.249318 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 23:19:18.000516811 +0000 UTC Jan 31 03:48:56 crc kubenswrapper[4667]: I0131 03:48:56.252276 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:56 crc kubenswrapper[4667]: I0131 03:48:56.252333 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:56 crc kubenswrapper[4667]: I0131 03:48:56.252346 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:56 crc kubenswrapper[4667]: I0131 03:48:56.252366 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:56 crc kubenswrapper[4667]: I0131 03:48:56.252382 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:56Z","lastTransitionTime":"2026-01-31T03:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:56 crc kubenswrapper[4667]: I0131 03:48:56.281642 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:56 crc kubenswrapper[4667]: E0131 03:48:56.281931 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:56 crc kubenswrapper[4667]: I0131 03:48:56.281653 4667 util.go:30] "No sandbox for pod can be found. 
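Every setters.go:603 record above embeds the node's Ready condition as a JSON object. A minimal sketch decoding that exact payload; the struct is a hand-rolled stand-in whose fields mirror the JSON visible in the log, not the k8s.io/api NodeCondition type:

```go
// readycond.go - decode the Ready condition JSON embedded in the
// "Node became not ready" records above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// nodeCondition mirrors the fields visible in the logged JSON; it is a
// stand-in for illustration, not the client-go type.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied verbatim from the setters.go:603 record above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:54Z","lastTransitionTime":"2026-01-31T03:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s since %s: %s\n", c.Type, c.Status, c.LastTransitionTime, c.Reason)
}
```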
Jan 31 03:48:56 crc kubenswrapper[4667]: I0131 03:48:56.281653 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 03:48:56 crc kubenswrapper[4667]: E0131 03:48:56.282078 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 03:48:56 crc kubenswrapper[4667]: I0131 03:48:56.281648 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 03:48:56 crc kubenswrapper[4667]: E0131 03:48:56.282190 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... the five-record node-status block repeats at 03:48:56.355, .458, .563, .667, .770, .874, .978 and 03:48:57.082, .186 ...]
Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.250050 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 21:37:37.3854803 +0000 UTC
Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.281156 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7"
Jan 31 03:48:57 crc kubenswrapper[4667]: E0131 03:48:57.281397 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4"
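The recurring NetworkReady=false message names a concrete condition: no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/, the directory the network plugin writes its config into once it starts. A minimal sketch of that check under the assumption that any *.conf, *.conflist, or *.json file would satisfy it; the authoritative check lives in the container runtime's libcni code, so this is illustrative only:

```go
// cnicheck.go - report whether the CNI config directory named in the
// kubelet errors above contains any network configuration files.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d" // directory named in the log messages
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatalf("cannot read %s: %v", dir, err)
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed acceptable extensions
			fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration files: the network plugin has not written its config yet")
	}
}
```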
pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.290554 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.290616 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.290635 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.290663 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.290684 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:57Z","lastTransitionTime":"2026-01-31T03:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.298945 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-31T03:48:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.313141 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.329224 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.343759 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.357113 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.371761 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.392751 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.394538 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.394690 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.394782 4667 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.394926 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.395818 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:57Z","lastTransitionTime":"2026-01-31T03:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.499813 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.499946 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.499970 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.499999 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.500018 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:57Z","lastTransitionTime":"2026-01-31T03:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.604042 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.604121 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.604140 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.604168 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.604187 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:57Z","lastTransitionTime":"2026-01-31T03:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.707689 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.708397 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.708507 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.708604 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.708709 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:57Z","lastTransitionTime":"2026-01-31T03:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.812258 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.812347 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.812389 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.812498 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.812528 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:57Z","lastTransitionTime":"2026-01-31T03:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.881655 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.907145 4667 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.917315 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.917377 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.917391 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.917418 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.917436 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:57Z","lastTransitionTime":"2026-01-31T03:48:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.930375 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.945681 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.969707 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:57 crc kubenswrapper[4667]: I0131 03:48:57.988654 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5jv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a24385e-62ca-4a82-8995-9f20115931c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5jv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:57Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:58 crc 
kubenswrapper[4667]: I0131 03:48:58.021540 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.021629 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.021644 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.021665 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.021711 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:58Z","lastTransitionTime":"2026-01-31T03:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.027489 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54
b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.045981 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.069060 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa0d7266-def3-467f-8ea8-8bb9d7364385\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f2362cecfaa0886df1bf67ce2fe0bc1f9586a785228c776daa0062302ae5f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a145cfd5492e6e2c3168e54747f3699b5148950bf88dc0431699e0dc6ff4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec15f3fe2b9b1c6827bc9093c19c1fe8cba5dc2aa0db3289e0a0b7029b8b09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.098383 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:52Z\\\",\\\"message\\\":\\\"ift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0073e06ab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-controller,},ClusterIP:10.217.5.16,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.16],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0131 03:48:52.267064 6240 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jhj5n_openshift-ovn-kubernetes(3d685ba5-5ff5-4e74-8d02-99a233fc6c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.121522 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3920ffb2-08f3-440b-bc6c-319a57bbe195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27cd88c349d4786018ab6ae21d45b22cdb95054c0b188bdce8cf97c53c09c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e3a943070029bd6e98682f2a4b3cfc0ab26dc2e9e7ab5179a60316923dbad33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4q9qz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:48:58Z is after 2025-08-24T17:21:41Z" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.125993 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.126045 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.126058 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.126081 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.126096 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:58Z","lastTransitionTime":"2026-01-31T03:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.229729 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.229781 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.229794 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.229815 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.229830 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:58Z","lastTransitionTime":"2026-01-31T03:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.251163 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 04:07:29.003223182 +0000 UTC Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.281748 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:48:58 crc kubenswrapper[4667]: E0131 03:48:58.281974 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.282249 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:48:58 crc kubenswrapper[4667]: E0131 03:48:58.282326 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.282489 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:48:58 crc kubenswrapper[4667]: E0131 03:48:58.282547 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.334004 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.334050 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.334059 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.334075 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.334088 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:58Z","lastTransitionTime":"2026-01-31T03:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.436305 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.436375 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.436394 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.436422 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.436442 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:58Z","lastTransitionTime":"2026-01-31T03:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.539582 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.539624 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.539635 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.539654 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.539667 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:58Z","lastTransitionTime":"2026-01-31T03:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.641633 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.641683 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.641695 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.641713 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.641726 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:58Z","lastTransitionTime":"2026-01-31T03:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.743960 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.743993 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.744003 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.744019 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.744029 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:58Z","lastTransitionTime":"2026-01-31T03:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.847379 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.847508 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.847581 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.847620 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.847694 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:58Z","lastTransitionTime":"2026-01-31T03:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.951186 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.951292 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.951304 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.951322 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:58 crc kubenswrapper[4667]: I0131 03:48:58.951332 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:58Z","lastTransitionTime":"2026-01-31T03:48:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.054889 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.054932 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.054944 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.054963 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.054976 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:59Z","lastTransitionTime":"2026-01-31T03:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.157760 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.157824 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.157863 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.157891 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.157908 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:59Z","lastTransitionTime":"2026-01-31T03:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.252031 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 10:35:44.659446405 +0000 UTC Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.261464 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.261519 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.261531 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.261553 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.261569 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:59Z","lastTransitionTime":"2026-01-31T03:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.281316 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:48:59 crc kubenswrapper[4667]: E0131 03:48:59.281617 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.364464 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.364532 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.364551 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.364605 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.364625 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:59Z","lastTransitionTime":"2026-01-31T03:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.467625 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.467668 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.467680 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.467701 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.467715 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:59Z","lastTransitionTime":"2026-01-31T03:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.570068 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.570103 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.570113 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.570128 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.570139 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:59Z","lastTransitionTime":"2026-01-31T03:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.673059 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.673118 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.673135 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.673159 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.673180 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:59Z","lastTransitionTime":"2026-01-31T03:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.775956 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.776012 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.776023 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.776043 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.776058 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:59Z","lastTransitionTime":"2026-01-31T03:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.878861 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.878897 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.878906 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.878922 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.878933 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:59Z","lastTransitionTime":"2026-01-31T03:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.981424 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.981509 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.981537 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.981566 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:48:59 crc kubenswrapper[4667]: I0131 03:48:59.981585 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:48:59Z","lastTransitionTime":"2026-01-31T03:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.084524 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.084668 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.084688 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.084758 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.084781 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:00Z","lastTransitionTime":"2026-01-31T03:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.187561 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.187671 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.188348 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.188603 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.188650 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:00Z","lastTransitionTime":"2026-01-31T03:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.253186 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 01:48:48.445810945 +0000 UTC Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.280624 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.280698 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:00 crc kubenswrapper[4667]: E0131 03:49:00.280814 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.280970 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:00 crc kubenswrapper[4667]: E0131 03:49:00.281127 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:00 crc kubenswrapper[4667]: E0131 03:49:00.281225 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.291761 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.291890 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.291921 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.291954 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.291975 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:00Z","lastTransitionTime":"2026-01-31T03:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.395091 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.395161 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.395184 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.395214 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.395234 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:00Z","lastTransitionTime":"2026-01-31T03:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.498417 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.498470 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.498487 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.498507 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.498521 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:00Z","lastTransitionTime":"2026-01-31T03:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.602760 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.602794 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.602802 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.602818 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.602827 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:00Z","lastTransitionTime":"2026-01-31T03:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.705575 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.705616 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.705634 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.705653 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.705668 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:00Z","lastTransitionTime":"2026-01-31T03:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.810082 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.810128 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.810136 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.810153 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.810163 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:00Z","lastTransitionTime":"2026-01-31T03:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.912980 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.913015 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.913025 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.913039 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:00 crc kubenswrapper[4667]: I0131 03:49:00.913049 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:00Z","lastTransitionTime":"2026-01-31T03:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.016216 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.016265 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.016275 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.016291 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.016302 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:01Z","lastTransitionTime":"2026-01-31T03:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.119217 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.119316 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.119346 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.119385 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.119414 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:01Z","lastTransitionTime":"2026-01-31T03:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.222995 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.223041 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.223052 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.223066 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.223076 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:01Z","lastTransitionTime":"2026-01-31T03:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.253316 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 18:51:50.553256505 +0000 UTC Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.280925 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:01 crc kubenswrapper[4667]: E0131 03:49:01.281111 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.326665 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.326717 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.326729 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.326746 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.326757 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:01Z","lastTransitionTime":"2026-01-31T03:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.430784 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.430832 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.430905 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.430923 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.430934 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:01Z","lastTransitionTime":"2026-01-31T03:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.534198 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.534268 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.534281 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.534316 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.534326 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:01Z","lastTransitionTime":"2026-01-31T03:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.637254 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.637294 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.637303 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.637319 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.637329 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:01Z","lastTransitionTime":"2026-01-31T03:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.739734 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.739769 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.739777 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.739798 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.739808 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:01Z","lastTransitionTime":"2026-01-31T03:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.843381 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.843445 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.843460 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.843480 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.843495 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:01Z","lastTransitionTime":"2026-01-31T03:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.946489 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.946540 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.946552 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.946579 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:01 crc kubenswrapper[4667]: I0131 03:49:01.946594 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:01Z","lastTransitionTime":"2026-01-31T03:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.049299 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.049355 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.049368 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.049392 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.049406 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:02Z","lastTransitionTime":"2026-01-31T03:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.152299 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.152345 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.152354 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.152370 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.152381 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:02Z","lastTransitionTime":"2026-01-31T03:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.253620 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 15:39:26.58947525 +0000 UTC Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.255550 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.255589 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.255597 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.255616 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.255627 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:02Z","lastTransitionTime":"2026-01-31T03:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.280884 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.280914 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.280887 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:02 crc kubenswrapper[4667]: E0131 03:49:02.281033 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:02 crc kubenswrapper[4667]: E0131 03:49:02.281150 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:02 crc kubenswrapper[4667]: E0131 03:49:02.281368 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.294403 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.363076 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.363133 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.363143 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.363161 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.363173 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:02Z","lastTransitionTime":"2026-01-31T03:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.466309 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.466344 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.466352 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.466369 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.466379 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:02Z","lastTransitionTime":"2026-01-31T03:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.568788 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.568817 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.568834 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.568870 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.568879 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:02Z","lastTransitionTime":"2026-01-31T03:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.671492 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.671523 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.671530 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.671544 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.671553 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:02Z","lastTransitionTime":"2026-01-31T03:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.775113 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.775166 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.775177 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.775197 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.775211 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:02Z","lastTransitionTime":"2026-01-31T03:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.877898 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.877950 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.877961 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.877980 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.877994 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:02Z","lastTransitionTime":"2026-01-31T03:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.980956 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.981008 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.981023 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.981044 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:02 crc kubenswrapper[4667]: I0131 03:49:02.981058 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:02Z","lastTransitionTime":"2026-01-31T03:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.084374 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.084445 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.084462 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.084489 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.084507 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:03Z","lastTransitionTime":"2026-01-31T03:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.187627 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.187672 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.187684 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.187702 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.187714 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:03Z","lastTransitionTime":"2026-01-31T03:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.217595 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.217642 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.217652 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.217672 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.217686 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:03Z","lastTransitionTime":"2026-01-31T03:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:03 crc kubenswrapper[4667]: E0131 03:49:03.230903 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:03Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.235086 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.235160 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
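
[Annotation] The patch failure above gives the root cause of the stuck status updates: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-31. A hedged diagnostic sketch follows that connects to that endpoint (address taken from the log; file name certprobe.go is illustrative) and prints the validity window of whatever certificate is presented. Verification is deliberately skipped so the handshake succeeds despite the expired certificate; this is a read-only probe, not a fix.

// certprobe.go — diagnostic sketch: inspect the certificate served by the
// webhook endpoint the kubelet is failing to call.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // we want to inspect the cert, not trust it
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	state := conn.ConnectionState()
	if len(state.PeerCertificates) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	cert := state.PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		// Matches the x509 error in the log: current time is after the
		// certificate's NotAfter (2025-08-24T17:21:41Z in this capture).
		fmt.Println("certificate is EXPIRED relative to the local clock")
	}
}
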
event="NodeHasNoDiskPressure" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.235185 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.235215 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.235242 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:03Z","lastTransitionTime":"2026-01-31T03:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:03 crc kubenswrapper[4667]: E0131 03:49:03.248641 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:03Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.253498 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.253541 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.253569 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.253589 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.253608 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:03Z","lastTransitionTime":"2026-01-31T03:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.253731 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 04:07:19.555620491 +0000 UTC Jan 31 03:49:03 crc kubenswrapper[4667]: E0131 03:49:03.267034 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:03Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.270825 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.270932 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.270960 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.270995 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.271019 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:03Z","lastTransitionTime":"2026-01-31T03:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.282145 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:03 crc kubenswrapper[4667]: E0131 03:49:03.282308 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:03 crc kubenswrapper[4667]: E0131 03:49:03.285209 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.290690 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.290924 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.290934 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.290951 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.290960 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:03Z","lastTransitionTime":"2026-01-31T03:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:03 crc kubenswrapper[4667]: E0131 03:49:03.304914 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 31 03:49:03 crc kubenswrapper[4667]: E0131 03:49:03.305033 4667 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.306900 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.306924 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.306934 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.306950 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.306969 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:03Z","lastTransitionTime":"2026-01-31T03:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.409788 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.409857 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.409867 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.409885 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.409896 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:03Z","lastTransitionTime":"2026-01-31T03:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.512196 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.512233 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.512242 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.512256 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.512266 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:03Z","lastTransitionTime":"2026-01-31T03:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.615138 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.615194 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.615207 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.615231 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.615244 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:03Z","lastTransitionTime":"2026-01-31T03:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.717720 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.717791 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.717801 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.717818 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.717830 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:03Z","lastTransitionTime":"2026-01-31T03:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.820571 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.820646 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.820658 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.820680 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.820697 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:03Z","lastTransitionTime":"2026-01-31T03:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.923607 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.923653 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.923664 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.923682 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:03 crc kubenswrapper[4667]: I0131 03:49:03.923694 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:03Z","lastTransitionTime":"2026-01-31T03:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.026556 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.026618 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.026636 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.026659 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.026678 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:04Z","lastTransitionTime":"2026-01-31T03:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.129560 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.129621 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.129634 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.129653 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.129666 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:04Z","lastTransitionTime":"2026-01-31T03:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.232499 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.232557 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.232568 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.232588 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.232599 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:04Z","lastTransitionTime":"2026-01-31T03:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.254874 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 12:51:34.471040059 +0000 UTC Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.281268 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.281347 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:04 crc kubenswrapper[4667]: E0131 03:49:04.281410 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:04 crc kubenswrapper[4667]: E0131 03:49:04.281563 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.281365 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:04 crc kubenswrapper[4667]: E0131 03:49:04.281682 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.337034 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.337086 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.337096 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.337112 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.337126 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:04Z","lastTransitionTime":"2026-01-31T03:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.440512 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.440574 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.440583 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.440603 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.440615 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:04Z","lastTransitionTime":"2026-01-31T03:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.542894 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.542946 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.542956 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.542974 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.542989 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:04Z","lastTransitionTime":"2026-01-31T03:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.645539 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.645597 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.645607 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.645629 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.645642 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:04Z","lastTransitionTime":"2026-01-31T03:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.747715 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.747764 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.747774 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.747794 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.747805 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:04Z","lastTransitionTime":"2026-01-31T03:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.850963 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.851011 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.851020 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.851038 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.851048 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:04Z","lastTransitionTime":"2026-01-31T03:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.953580 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.953633 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.953643 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.953661 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:04 crc kubenswrapper[4667]: I0131 03:49:04.953674 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:04Z","lastTransitionTime":"2026-01-31T03:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.055928 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.055986 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.056003 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.056027 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.056048 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:05Z","lastTransitionTime":"2026-01-31T03:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.158715 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.158773 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.158783 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.158799 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.158809 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:05Z","lastTransitionTime":"2026-01-31T03:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.255531 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 00:20:21.679918298 +0000 UTC Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.261101 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.261132 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.261141 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.261156 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.261166 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:05Z","lastTransitionTime":"2026-01-31T03:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.280919 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:05 crc kubenswrapper[4667]: E0131 03:49:05.281065 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.281920 4667 scope.go:117] "RemoveContainer" containerID="bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3" Jan 31 03:49:05 crc kubenswrapper[4667]: E0131 03:49:05.282139 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jhj5n_openshift-ovn-kubernetes(3d685ba5-5ff5-4e74-8d02-99a233fc6c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.363901 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.363945 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.363957 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.363976 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.363989 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:05Z","lastTransitionTime":"2026-01-31T03:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.465997 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.466049 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.466065 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.466085 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.466096 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:05Z","lastTransitionTime":"2026-01-31T03:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.569171 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.569522 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.569592 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.569658 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.569722 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:05Z","lastTransitionTime":"2026-01-31T03:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.672390 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.672454 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.672466 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.672496 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.672514 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:05Z","lastTransitionTime":"2026-01-31T03:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.775558 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.775892 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.775993 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.776079 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.776139 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:05Z","lastTransitionTime":"2026-01-31T03:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.878927 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.878972 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.878982 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.878999 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.879011 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:05Z","lastTransitionTime":"2026-01-31T03:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.981803 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.982095 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.982175 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.982243 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:05 crc kubenswrapper[4667]: I0131 03:49:05.982316 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:05Z","lastTransitionTime":"2026-01-31T03:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.085411 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.085456 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.085466 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.085484 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.085494 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:06Z","lastTransitionTime":"2026-01-31T03:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.188014 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.188068 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.188081 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.188101 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.188115 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:06Z","lastTransitionTime":"2026-01-31T03:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.255911 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 21:40:34.39664848 +0000 UTC Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.281198 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.281223 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.281257 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:06 crc kubenswrapper[4667]: E0131 03:49:06.281494 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:06 crc kubenswrapper[4667]: E0131 03:49:06.281769 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:06 crc kubenswrapper[4667]: E0131 03:49:06.281921 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.290763 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.290893 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.290987 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.291084 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.291168 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:06Z","lastTransitionTime":"2026-01-31T03:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.394308 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.394353 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.394385 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.394416 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.394428 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:06Z","lastTransitionTime":"2026-01-31T03:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.497104 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.497171 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.497183 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.497202 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.497214 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:06Z","lastTransitionTime":"2026-01-31T03:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.600329 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.600442 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.600454 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.600469 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.600485 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:06Z","lastTransitionTime":"2026-01-31T03:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.703384 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.703422 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.703433 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.703451 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.703463 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:06Z","lastTransitionTime":"2026-01-31T03:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.806109 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.806173 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.806185 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.806202 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.806214 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:06Z","lastTransitionTime":"2026-01-31T03:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.908648 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.908686 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.908694 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.908709 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:06 crc kubenswrapper[4667]: I0131 03:49:06.908719 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:06Z","lastTransitionTime":"2026-01-31T03:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.010891 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.010928 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.010938 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.010953 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.010965 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:07Z","lastTransitionTime":"2026-01-31T03:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.112733 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.112773 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.112783 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.112800 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.112810 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:07Z","lastTransitionTime":"2026-01-31T03:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.214630 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.214664 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.214673 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.214687 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.214696 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:07Z","lastTransitionTime":"2026-01-31T03:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.256448 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 17:31:00.97305455 +0000 UTC Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.280981 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:07 crc kubenswrapper[4667]: E0131 03:49:07.281129 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.291262 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ad8c36-abaf-4c43-a606-0ba3332c5923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b210ad25dbcd4bf7b51c2f927b5ca85daf9baccfc9d52bbc588be0116b0f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f97be14eb7d701db876925386940db52004c3cd69931268f857f10ce702c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f97be14eb7d701db876925386940db52004c3cd69931268f857f10ce702c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:07Z is after 2025-08-24T17:21:41Z" Jan 31 
03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.305575 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:07Z is after 2025-08-24T17:21:41Z"
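The status patches themselves are well-formed; what kills each one is the webhook call: the kubelet cannot verify the network-node-identity webhook's serving certificate because the current time (2026-01-31T03:49:07Z) falls after the certificate's NotAfter of 2025-08-24T17:21:41Z. That is the standard validity-window check Go's crypto/x509 applies during TLS verification, sketched below against a local PEM copy of the certificate. The file name is hypothetical; on this node the webhook cert is mounted into the network-node-identity pod at /etc/webhook-cert/, as the volumeMounts in the entries below show.

// A sketch of the validity check behind "x509: certificate has expired or
// is not yet valid": the verifier rejects any cert whose [NotBefore,
// NotAfter] window does not contain the current time.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("webhook-cert.pem") // hypothetical local copy
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// The condition the kubelet hits above: current time
		// 2026-01-31T03:49:07Z is after NotAfter 2025-08-24T17:21:41Z.
		fmt.Printf("expired: now %v is after NotAfter %v\n", now, cert.NotAfter)
	case now.Before(cert.NotBefore):
		fmt.Println("not yet valid")
	default:
		fmt.Println("within validity window")
	}
}

Every status update the kubelet attempts goes through the same pod.network-node-identity.openshift.io webhook, so the identical x509 failure repeats for each pod in the entries that follow.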
Has your network provider started?"} Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.320992 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:07Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.334560 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:07Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.348551 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:07Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.361179 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:07Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.379910 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:07Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.401903 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:07Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.414275 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:07Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.420118 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.420149 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.420161 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.420179 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.420190 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:07Z","lastTransitionTime":"2026-01-31T03:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.426821 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:07Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.439929 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:07Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.451698 4667 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:07Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.465042 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:07Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.476484 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5jv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a24385e-62ca-4a82-8995-9f20115931c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5jv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:07Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:07 crc 
kubenswrapper[4667]: I0131 03:49:07.496789 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:07Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.510408 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:07Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.522104 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa0d7266-def3-467f-8ea8-8bb9d7364385\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f2362cecfaa0886df1bf67ce2fe0bc1f9586a785228c776daa0062302ae5f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a145cfd5492e6e2c3168e54747f3699b5148950bf88dc0431699e0dc6ff4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec15f3fe2b9b1c6827bc9093c19c1fe8cba5dc2aa0db3289e0a0b7029b8b09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:07Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.523256 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.523286 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.523295 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.523311 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 
03:49:07.523338 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:07Z","lastTransitionTime":"2026-01-31T03:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.545195 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ov
nkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:52Z\\\",\\\"message\\\":\\\"ift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0073e06ab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-controller,},ClusterIP:10.217.5.16,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.16],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0131 03:48:52.267064 6240 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jhj5n_openshift-ovn-kubernetes(3d685ba5-5ff5-4e74-8d02-99a233fc6c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:07Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.560266 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3920ffb2-08f3-440b-bc6c-319a57bbe195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27cd88c349d4786018ab6ae21d45b22cdb95054c0b188bdce8cf97c53c09c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e3a943070029bd6e98682f2a4b3cfc0ab26dc2e9e7ab5179a60316923dbad33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4q9qz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:07Z is after 2025-08-24T17:21:41Z" Jan 31 
03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.625891 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.625944 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.625954 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.625972 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.625982 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:07Z","lastTransitionTime":"2026-01-31T03:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.739059 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.739126 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.739135 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.739148 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.739157 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:07Z","lastTransitionTime":"2026-01-31T03:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.842303 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.842340 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.842352 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.842371 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.842385 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:07Z","lastTransitionTime":"2026-01-31T03:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.944961 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.945000 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.945009 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.945023 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:07 crc kubenswrapper[4667]: I0131 03:49:07.945034 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:07Z","lastTransitionTime":"2026-01-31T03:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.048334 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.048377 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.048388 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.048405 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.048416 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:08Z","lastTransitionTime":"2026-01-31T03:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.151315 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.151350 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.151359 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.151374 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.151383 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:08Z","lastTransitionTime":"2026-01-31T03:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.253754 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.253807 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.253824 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.253874 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.253892 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:08Z","lastTransitionTime":"2026-01-31T03:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.256949 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 09:25:16.274163567 +0000 UTC Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.281569 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.281664 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.281580 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:08 crc kubenswrapper[4667]: E0131 03:49:08.281939 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:08 crc kubenswrapper[4667]: E0131 03:49:08.282132 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:08 crc kubenswrapper[4667]: E0131 03:49:08.282286 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.356757 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.357006 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.357112 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.357252 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.357366 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:08Z","lastTransitionTime":"2026-01-31T03:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.460058 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.460692 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.460757 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.460857 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.460923 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:08Z","lastTransitionTime":"2026-01-31T03:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.563500 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.563559 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.563575 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.563594 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.563606 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:08Z","lastTransitionTime":"2026-01-31T03:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.666621 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.666684 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.666694 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.666710 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.666719 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:08Z","lastTransitionTime":"2026-01-31T03:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.769784 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.769870 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.769888 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.769994 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.770022 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:08Z","lastTransitionTime":"2026-01-31T03:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.872151 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.872203 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.872218 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.872240 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.872254 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:08Z","lastTransitionTime":"2026-01-31T03:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.975894 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.976195 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.976338 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.976476 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:08 crc kubenswrapper[4667]: I0131 03:49:08.976586 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:08Z","lastTransitionTime":"2026-01-31T03:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.079778 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.079827 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.079853 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.079873 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.079883 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:09Z","lastTransitionTime":"2026-01-31T03:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.182754 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.182905 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.182924 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.182948 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.182971 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:09Z","lastTransitionTime":"2026-01-31T03:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.257392 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 11:43:40.902674702 +0000 UTC Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.281099 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:09 crc kubenswrapper[4667]: E0131 03:49:09.281418 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.285645 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.285866 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.285974 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.286094 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.286189 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:09Z","lastTransitionTime":"2026-01-31T03:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.389573 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.389622 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.389631 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.389649 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.389658 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:09Z","lastTransitionTime":"2026-01-31T03:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.491823 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.492408 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.492481 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.492735 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.492809 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:09Z","lastTransitionTime":"2026-01-31T03:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.580932 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs\") pod \"network-metrics-daemon-n5jv7\" (UID: \"4a24385e-62ca-4a82-8995-9f20115931c4\") " pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:09 crc kubenswrapper[4667]: E0131 03:49:09.581165 4667 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:49:09 crc kubenswrapper[4667]: E0131 03:49:09.581271 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs podName:4a24385e-62ca-4a82-8995-9f20115931c4 nodeName:}" failed. No retries permitted until 2026-01-31 03:49:41.581248008 +0000 UTC m=+105.097583307 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs") pod "network-metrics-daemon-n5jv7" (UID: "4a24385e-62ca-4a82-8995-9f20115931c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.595586 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.595616 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.595624 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.595639 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.595649 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:09Z","lastTransitionTime":"2026-01-31T03:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.697835 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.698199 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.698296 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.698368 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.698432 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:09Z","lastTransitionTime":"2026-01-31T03:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.800363 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.800401 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.800412 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.800426 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.800434 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:09Z","lastTransitionTime":"2026-01-31T03:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.902299 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.902326 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.902334 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.902346 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:09 crc kubenswrapper[4667]: I0131 03:49:09.902356 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:09Z","lastTransitionTime":"2026-01-31T03:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.007040 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.007079 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.007091 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.007106 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.007120 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:10Z","lastTransitionTime":"2026-01-31T03:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.110172 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.110260 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.110270 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.110292 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.110302 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:10Z","lastTransitionTime":"2026-01-31T03:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.213017 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.213058 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.213068 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.213106 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.213116 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:10Z","lastTransitionTime":"2026-01-31T03:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.290548 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:34:45.891833779 +0000 UTC Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.291131 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:10 crc kubenswrapper[4667]: E0131 03:49:10.298068 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.298237 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:10 crc kubenswrapper[4667]: E0131 03:49:10.298515 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.298248 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:10 crc kubenswrapper[4667]: E0131 03:49:10.299228 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.314817 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.314865 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.314874 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.314889 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.314900 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:10Z","lastTransitionTime":"2026-01-31T03:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.417654 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.417687 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.417695 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.417711 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.417720 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:10Z","lastTransitionTime":"2026-01-31T03:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.519827 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.520130 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.520140 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.520155 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.520165 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:10Z","lastTransitionTime":"2026-01-31T03:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.622624 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.622965 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.623090 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.623208 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.623331 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:10Z","lastTransitionTime":"2026-01-31T03:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.726445 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.726483 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.726495 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.726515 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.726526 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:10Z","lastTransitionTime":"2026-01-31T03:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.756338 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cd764_b069c8d1-f785-4509-8ee6-7d44525bdc89/kube-multus/0.log"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.756396 4667 generic.go:334] "Generic (PLEG): container finished" podID="b069c8d1-f785-4509-8ee6-7d44525bdc89" containerID="3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a" exitCode=1
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.756469 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cd764" event={"ID":"b069c8d1-f785-4509-8ee6-7d44525bdc89","Type":"ContainerDied","Data":"3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a"}
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.757163 4667 scope.go:117] "RemoveContainer" containerID="3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.769217 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:10Z is after 2025-08-24T17:21:41Z"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.781572 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:49:10Z\\\",\\\"message\\\":\\\"2026-01-31T03:48:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9bb64c19-df16-4367-ac81-2cae05fe0d99\\\\n2026-01-31T03:48:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9bb64c19-df16-4367-ac81-2cae05fe0d99 to /host/opt/cni/bin/\\\\n2026-01-31T03:48:25Z [verbose] multus-daemon started\\\\n2026-01-31T03:48:25Z [verbose] Readiness Indicator file check\\\\n2026-01-31T03:49:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:10Z is after 2025-08-24T17:21:41Z"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.791224 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5jv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a24385e-62ca-4a82-8995-9f20115931c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5jv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:10Z is after 2025-08-24T17:21:41Z"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.805770 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:10Z is after 2025-08-24T17:21:41Z"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.818153 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:10Z is after 2025-08-24T17:21:41Z"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.828342 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.828394 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.828404 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.828422 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.828433 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:10Z","lastTransitionTime":"2026-01-31T03:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.836371 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa0d7266-def3-467f-8ea8-8bb9d7364385\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f2362cecfaa0886df1bf67ce2fe0bc1f9586a785228c776daa0062302ae5f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a145cfd5492e6e2c3168e54747f3699b5148950bf88dc0431699e0dc6ff4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec15f3fe2b9b1c6827bc9093c19c1fe8cba5dc2aa0db3289e0a0b7029b8b09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:10Z is after 2025-08-24T17:21:41Z"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.871924 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:52Z\\\",\\\"message\\\":\\\"ift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0073e06ab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-controller,},ClusterIP:10.217.5.16,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.16],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0131 03:48:52.267064 6240 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jhj5n_openshift-ovn-kubernetes(3d685ba5-5ff5-4e74-8d02-99a233fc6c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:10Z is after 2025-08-24T17:21:41Z"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.886689 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3920ffb2-08f3-440b-bc6c-319a57bbe195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27cd88c349d4786018ab6ae21d45b22cdb95054c0b188bdce8cf97c53c09c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e3a943070029bd6e98682f2a4b3cfc0ab26dc2e9e7ab5179a60316923dbad33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4q9qz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:10Z is after 2025-08-24T17:21:41Z"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.909642 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:10Z is after 2025-08-24T17:21:41Z"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.924105 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:10Z is after 2025-08-24T17:21:41Z"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.931253 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.931291 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.931302 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.931321 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.931333 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:10Z","lastTransitionTime":"2026-01-31T03:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.936560 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ad8c36-abaf-4c43-a606-0ba3332c5923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b210ad25dbcd4bf7b51c2f927b5ca85daf9baccfc9d52bbc588be0116b0f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f97be14eb7d701db876925386940db52004c3cd69931268f857f10ce702c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f97be14eb7d701db876925386940db52004c3cd69931268f857f10ce702c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:10Z is after 2025-08-24T17:21:41Z"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.951010 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:10Z is after 2025-08-24T17:21:41Z"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.966332 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:10Z is after 2025-08-24T17:21:41Z"
Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.978916 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:10 crc kubenswrapper[4667]: I0131 03:49:10.990505 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:10Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.003098 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.011621 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.024096 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.033929 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.033969 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.033980 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.033995 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.034006 4667 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:11Z","lastTransitionTime":"2026-01-31T03:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.037526 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.136703 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.136740 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.136750 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.136764 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.136774 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:11Z","lastTransitionTime":"2026-01-31T03:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.239647 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.239686 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.239697 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.239713 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.239722 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:11Z","lastTransitionTime":"2026-01-31T03:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.281346 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:11 crc kubenswrapper[4667]: E0131 03:49:11.281669 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.291029 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 04:00:52.335051039 +0000 UTC Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.341352 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.341381 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.341390 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.341406 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.341416 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:11Z","lastTransitionTime":"2026-01-31T03:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.443752 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.443814 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.443823 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.443835 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.443934 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:11Z","lastTransitionTime":"2026-01-31T03:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.546303 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.546350 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.546360 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.546377 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.546387 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:11Z","lastTransitionTime":"2026-01-31T03:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.648508 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.648542 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.648553 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.648567 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.648577 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:11Z","lastTransitionTime":"2026-01-31T03:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.751531 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.751572 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.751581 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.751597 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.751607 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:11Z","lastTransitionTime":"2026-01-31T03:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.767035 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cd764_b069c8d1-f785-4509-8ee6-7d44525bdc89/kube-multus/0.log" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.767149 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cd764" event={"ID":"b069c8d1-f785-4509-8ee6-7d44525bdc89","Type":"ContainerStarted","Data":"370b5296f121631f739cdba4f61f648a9f00aec73518549365ffd970bea8db8d"} Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.782006 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.797309 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.809320 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.824638 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.835556 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.853255 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.853783 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.853811 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.853819 4667 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.853833 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.853856 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:11Z","lastTransitionTime":"2026-01-31T03:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.874404 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.891709 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.904832 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.916237 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.930573 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370b5296f121631f739cdba4f61f648a9f00aec73518549365ffd970bea8db8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:49:10Z\\\",\\\"message\\\":\\\"2026-01-31T03:48:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9bb64c19-df16-4367-ac81-2cae05fe0d99\\\\n2026-01-31T03:48:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9bb64c19-df16-4367-ac81-2cae05fe0d99 to /host/opt/cni/bin/\\\\n2026-01-31T03:48:25Z [verbose] multus-daemon started\\\\n2026-01-31T03:48:25Z [verbose] Readiness Indicator file check\\\\n2026-01-31T03:49:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.941983 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5jv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a24385e-62ca-4a82-8995-9f20115931c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5jv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.956160 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.956218 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.956235 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.956259 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.956273 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:11Z","lastTransitionTime":"2026-01-31T03:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.963060 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.977233 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:11 crc kubenswrapper[4667]: I0131 03:49:11.989428 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa0d7266-def3-467f-8ea8-8bb9d7364385\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f2362cecfaa0886df1bf67ce2fe0bc1f9586a785228c776daa0062302ae5f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a145cfd5492e6e2c3168e54747f3699b5148950bf88dc0431699e0dc6ff4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec15f3fe2b9b1c6827bc9093c19c1fe8cba5dc2aa0db3289e0a0b7029b8b09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:11Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.007890 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:52Z\\\",\\\"message\\\":\\\"ift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0073e06ab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-controller,},ClusterIP:10.217.5.16,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.16],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0131 03:48:52.267064 6240 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jhj5n_openshift-ovn-kubernetes(3d685ba5-5ff5-4e74-8d02-99a233fc6c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.020101 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3920ffb2-08f3-440b-bc6c-319a57bbe195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27cd88c349d4786018ab6ae21d45b22cdb95054c0b188bdce8cf97c53c09c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e3a943070029bd6e98682f2a4b3cfc0ab26dc2e9e7ab5179a60316923dbad33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4q9qz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.030991 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ad8c36-abaf-4c43-a606-0ba3332c5923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b210ad25dbcd4bf7b51c2f927b5ca85daf9baccfc9d52bbc588be0116b0f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f97be14eb7d701db876925386940db52004c3cd69931268f857f10ce702c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f97be14eb7d701db876925386940db52004c3cd69931268f857f10ce702c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.046911 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:12Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.058621 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.058686 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.058699 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.058715 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.058736 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:12Z","lastTransitionTime":"2026-01-31T03:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.161403 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.161451 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.161462 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.161478 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.161487 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:12Z","lastTransitionTime":"2026-01-31T03:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.264345 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.264394 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.264411 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.264434 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.264452 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:12Z","lastTransitionTime":"2026-01-31T03:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.280925 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.281025 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.281058 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:12 crc kubenswrapper[4667]: E0131 03:49:12.281138 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:12 crc kubenswrapper[4667]: E0131 03:49:12.281270 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:12 crc kubenswrapper[4667]: E0131 03:49:12.281337 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.292184 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 16:51:41.500731713 +0000 UTC Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.367441 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.367500 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.367517 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.367540 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.367556 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:12Z","lastTransitionTime":"2026-01-31T03:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.470933 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.470996 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.471015 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.471047 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.471069 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:12Z","lastTransitionTime":"2026-01-31T03:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.574252 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.574301 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.574310 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.574329 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.574339 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:12Z","lastTransitionTime":"2026-01-31T03:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.676618 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.676674 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.676709 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.676730 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.676741 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:12Z","lastTransitionTime":"2026-01-31T03:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.779189 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.779264 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.779280 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.779306 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.779330 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:12Z","lastTransitionTime":"2026-01-31T03:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.882340 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.882402 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.882418 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.882446 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.882520 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:12Z","lastTransitionTime":"2026-01-31T03:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.985076 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.985137 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.985156 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.985178 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:12 crc kubenswrapper[4667]: I0131 03:49:12.985193 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:12Z","lastTransitionTime":"2026-01-31T03:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.087468 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.087517 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.087528 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.087544 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.087554 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:13Z","lastTransitionTime":"2026-01-31T03:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.190169 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.190220 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.190238 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.190261 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.190274 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:13Z","lastTransitionTime":"2026-01-31T03:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.281264 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:13 crc kubenswrapper[4667]: E0131 03:49:13.281585 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.292311 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 21:59:19.473006179 +0000 UTC Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.292707 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.292750 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.292762 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.292781 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.292792 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:13Z","lastTransitionTime":"2026-01-31T03:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.396081 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.396132 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.396148 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.396168 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.396182 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:13Z","lastTransitionTime":"2026-01-31T03:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.499900 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.499971 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.499991 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.500050 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.500076 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:13Z","lastTransitionTime":"2026-01-31T03:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.610618 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.610695 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.610720 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.610752 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.610777 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:13Z","lastTransitionTime":"2026-01-31T03:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.630012 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.630077 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.630094 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.630120 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.630137 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:13Z","lastTransitionTime":"2026-01-31T03:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:13 crc kubenswrapper[4667]: E0131 03:49:13.652047 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:13Z is after 
2025-08-24T17:21:41Z" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.658776 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.658834 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.658877 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.658905 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.658927 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:13Z","lastTransitionTime":"2026-01-31T03:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:13 crc kubenswrapper[4667]: E0131 03:49:13.680193 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:13Z is after 
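The payload the kubelet is retrying above is a strategic merge patch: the "$setElementOrder/conditions" key pins the order of the conditions list, and each object under "conditions" is merged into the node's existing list by its "type" key. A minimal sketch in Go of how a payload of that shape can be assembled; the struct and values are illustrative, taken from fields visible in the log, not the kubelet's actual implementation:

package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition models only the fields that appear in the logged patch.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status,omitempty"`
	Reason             string `json:"reason,omitempty"`
	Message            string `json:"message,omitempty"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime,omitempty"`
	LastTransitionTime string `json:"lastTransitionTime,omitempty"`
}

func main() {
	// "$setElementOrder/conditions" fixes the ordering of the list;
	// the entries under "conditions" merge by their "type" key.
	patch := map[string]any{
		"status": map[string]any{
			"$setElementOrder/conditions": []nodeCondition{
				{Type: "MemoryPressure"},
				{Type: "DiskPressure"},
				{Type: "PIDPressure"},
				{Type: "Ready"},
			},
			"conditions": []nodeCondition{{
				Type:               "Ready",
				Status:             "False",
				Reason:             "KubeletNotReady",
				Message:            "container runtime network not ready",
				LastHeartbeatTime:  "2026-01-31T03:49:13Z",
				LastTransitionTime: "2026-01-31T03:49:13Z",
			}},
		},
	}
	out, err := json.MarshalIndent(patch, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

Note that the rejection in the log happens at the transport layer, before the API server ever evaluates this payload: the patch itself is well-formed, and the identical body is resent on every retry.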
2025-08-24T17:21:41Z" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.686775 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.686832 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.686884 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.686909 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.686929 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:13Z","lastTransitionTime":"2026-01-31T03:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:13 crc kubenswrapper[4667]: E0131 03:49:13.709523 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:13Z is after 
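Every retry dies at the same point: the node.network-node-identity.openshift.io webhook on https://127.0.0.1:9743 presents a certificate whose notAfter (2025-08-24T17:21:41Z) is months behind the node's clock (2026-01-31), so TLS verification fails before the patch is evaluated. A hedged diagnostic sketch one could run on the node to confirm what that endpoint is serving; the address is taken from the log, and this is not part of any OpenShift tooling:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Skip verification on purpose: a normal handshake would fail with
	// exactly the x509 "certificate has expired" error seen in the log,
	// and here we want to inspect the certificate rather than trust it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook endpoint: %v", err)
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		log.Fatal("no peer certificates presented")
	}
	leaf := certs[0]
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter.Format(time.RFC3339))
	if now := time.Now(); now.After(leaf.NotAfter) {
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), leaf.NotAfter.Format(time.RFC3339))
	}
}

A plausible reading, given the crc hostname and the 2025-vs-2026 timestamps, is that the VM's bundled certificates aged out while the cluster was powered off and have not yet been rotated.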
2025-08-24T17:21:41Z" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.715500 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.715600 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.715618 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.715677 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.715699 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:13Z","lastTransitionTime":"2026-01-31T03:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:13 crc kubenswrapper[4667]: E0131 03:49:13.735143 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:13Z is after 
2025-08-24T17:21:41Z" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.739573 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.739608 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.739622 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.739642 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.739657 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:13Z","lastTransitionTime":"2026-01-31T03:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:13 crc kubenswrapper[4667]: E0131 03:49:13.754434 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:13Z is after 
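Independently of the webhook failure, the Ready condition itself stays False because the kubelet finds no CNI network configuration. A small sketch that mirrors that check by listing the directory named in the message; the extension filter is an assumption based on common CNI conventions, not the kubelet's exact matching logic:

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the kubelet's NetworkPluginNotReady message.
	const netDir = "/etc/kubernetes/cni/net.d"

	entries, err := os.ReadDir(netDir)
	if err != nil {
		log.Fatalf("read %s: %v", netDir, err)
	}

	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // conventional CNI config suffixes (assumption)
			fmt.Println("CNI config present:", filepath.Join(netDir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file found; the node will stay NotReady")
	}
}

On this cluster the network plugin that would write that file is plausibly itself blocked behind the expired certificate, which would explain why the condition repeats unchanged below.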
2025-08-24T17:21:41Z" Jan 31 03:49:13 crc kubenswrapper[4667]: E0131 03:49:13.754592 4667 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.756347 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.756378 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.756392 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.756411 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.756426 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:13Z","lastTransitionTime":"2026-01-31T03:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.859211 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.859255 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.859265 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.859283 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.859295 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:13Z","lastTransitionTime":"2026-01-31T03:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.963041 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.963097 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.963115 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.963138 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:13 crc kubenswrapper[4667]: I0131 03:49:13.963155 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:13Z","lastTransitionTime":"2026-01-31T03:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.065894 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.065938 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.065947 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.065965 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.065977 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:14Z","lastTransitionTime":"2026-01-31T03:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.169477 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.169518 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.169529 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.169546 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.169557 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:14Z","lastTransitionTime":"2026-01-31T03:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.272269 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.272314 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.272327 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.272345 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.272357 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:14Z","lastTransitionTime":"2026-01-31T03:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.281181 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.281205 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:14 crc kubenswrapper[4667]: E0131 03:49:14.281380 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:14 crc kubenswrapper[4667]: E0131 03:49:14.281424 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.281976 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:14 crc kubenswrapper[4667]: E0131 03:49:14.282202 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.293125 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:02:10.651763023 +0000 UTC Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.374647 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.374979 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.375110 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.375215 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.375334 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:14Z","lastTransitionTime":"2026-01-31T03:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.478173 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.478219 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.478228 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.478243 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.478255 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:14Z","lastTransitionTime":"2026-01-31T03:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.581100 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.581137 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.581146 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.581163 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.581172 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:14Z","lastTransitionTime":"2026-01-31T03:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.683139 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.683176 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.683185 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.683200 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.683209 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:14Z","lastTransitionTime":"2026-01-31T03:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.785706 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.785759 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.785772 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.785791 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.785805 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:14Z","lastTransitionTime":"2026-01-31T03:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.888668 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.888985 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.889110 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.889205 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.889287 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:14Z","lastTransitionTime":"2026-01-31T03:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.993142 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.993202 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.993215 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.993237 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:14 crc kubenswrapper[4667]: I0131 03:49:14.993249 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:14Z","lastTransitionTime":"2026-01-31T03:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.095270 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.095316 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.095326 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.095349 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.095362 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:15Z","lastTransitionTime":"2026-01-31T03:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.197481 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.197716 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.197795 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.197916 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.197994 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:15Z","lastTransitionTime":"2026-01-31T03:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.281430 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:15 crc kubenswrapper[4667]: E0131 03:49:15.282041 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.294088 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:22:22.690380101 +0000 UTC Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.300458 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.300546 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.300566 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.300597 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.300621 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:15Z","lastTransitionTime":"2026-01-31T03:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.406998 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.407056 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.407068 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.407159 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.407192 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:15Z","lastTransitionTime":"2026-01-31T03:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.509360 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.509402 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.509417 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.509433 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.509445 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:15Z","lastTransitionTime":"2026-01-31T03:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.612278 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.612316 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.612327 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.612344 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.612356 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:15Z","lastTransitionTime":"2026-01-31T03:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.714737 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.714764 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.714771 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.714785 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.714795 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:15Z","lastTransitionTime":"2026-01-31T03:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.817036 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.817074 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.817086 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.817114 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.817127 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:15Z","lastTransitionTime":"2026-01-31T03:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.920029 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.920066 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.920075 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.920089 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:15 crc kubenswrapper[4667]: I0131 03:49:15.920099 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:15Z","lastTransitionTime":"2026-01-31T03:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.022696 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.022771 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.022797 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.022829 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.022881 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:16Z","lastTransitionTime":"2026-01-31T03:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.126182 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.126236 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.126248 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.126269 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.126282 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:16Z","lastTransitionTime":"2026-01-31T03:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.228588 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.228650 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.228671 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.228705 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.228728 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:16Z","lastTransitionTime":"2026-01-31T03:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.281552 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.281668 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.281550 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:16 crc kubenswrapper[4667]: E0131 03:49:16.281738 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:16 crc kubenswrapper[4667]: E0131 03:49:16.281904 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:16 crc kubenswrapper[4667]: E0131 03:49:16.282068 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.294828 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 05:39:16.206955344 +0000 UTC Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.331217 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.331258 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.331269 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.331286 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.331296 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:16Z","lastTransitionTime":"2026-01-31T03:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.434485 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.434550 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.434566 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.434592 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.434609 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:16Z","lastTransitionTime":"2026-01-31T03:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.536518 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.536552 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.536561 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.536575 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.536583 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:16Z","lastTransitionTime":"2026-01-31T03:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.639406 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.639486 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.639507 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.639537 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.639558 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:16Z","lastTransitionTime":"2026-01-31T03:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.741972 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.742040 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.742050 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.742113 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.742128 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:16Z","lastTransitionTime":"2026-01-31T03:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.844341 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.844381 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.844392 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.844408 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.844418 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:16Z","lastTransitionTime":"2026-01-31T03:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.948050 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.948112 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.948129 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.948157 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:16 crc kubenswrapper[4667]: I0131 03:49:16.948175 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:16Z","lastTransitionTime":"2026-01-31T03:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.053978 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.054051 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.054074 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.054110 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.054134 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:17Z","lastTransitionTime":"2026-01-31T03:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.156622 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.156676 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.156693 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.156719 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.156739 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:17Z","lastTransitionTime":"2026-01-31T03:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.259943 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.260295 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.260452 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.260664 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.260789 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:17Z","lastTransitionTime":"2026-01-31T03:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.281959 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7"
Jan 31 03:49:17 crc kubenswrapper[4667]: E0131 03:49:17.283330 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4"
Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.296017 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 05:37:26.697728316 +0000 UTC
Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.304523 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:17Z is after 2025-08-24T17:21:41Z"
Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.319651 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.339038 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa0d7266-def3-467f-8ea8-8bb9d7364385\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f2362cecfaa0886df1bf67ce2fe0bc1f9586a785228c776daa0062302ae5f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a145cfd5492e6e2c3168e54747f3699b5148950bf88dc0431699e0dc6ff4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec15f3fe2b9b1c6827bc9093c19c1fe8cba5dc2aa0db3289e0a0b7029b8b09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.364026 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.364188 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.364212 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.364240 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 
03:49:17.364259 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:17Z","lastTransitionTime":"2026-01-31T03:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.367313 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ov
nkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:52Z\\\",\\\"message\\\":\\\"ift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0073e06ab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-controller,},ClusterIP:10.217.5.16,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.16],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0131 03:48:52.267064 6240 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jhj5n_openshift-ovn-kubernetes(3d685ba5-5ff5-4e74-8d02-99a233fc6c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.378576 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3920ffb2-08f3-440b-bc6c-319a57bbe195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27cd88c349d4786018ab6ae21d45b22cdb95054c0b188bdce8cf97c53c09c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e3a943070029bd6e98682f2a4b3cfc0ab26dc2e9e7ab5179a60316923dbad33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4q9qz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:17Z is after 2025-08-24T17:21:41Z" Jan 31 
03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.390639 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ad8c36-abaf-4c43-a606-0ba3332c5923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b210ad25dbcd4bf7b51c2f927b5ca85daf9baccfc9d52bbc588be0116b0f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f97be14eb7d701db876925386940db52004c3cd69931268f857f10ce702c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f97be14eb7d701db876925386940db52004c3cd69931268f857f10ce702c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.408140 4667 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.425684 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.439538 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.457324 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.466367 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.466404 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.466416 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.466435 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.466448 4667 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:17Z","lastTransitionTime":"2026-01-31T03:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.469049 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.479913 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.488959 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.498303 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.507970 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.516969 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.524852 4667 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.535037 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370b5296f121631f739cdba4f61f648a9f00aec73518549365ffd970bea8db8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:49:10Z\\\",\\\"message\\\":\\\"2026-01-31T03:48:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9bb64c19-df16-4367-ac81-2cae05fe0d99\\\\n2026-01-31T03:48:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9bb64c19-df16-4367-ac81-2cae05fe0d99 to /host/opt/cni/bin/\\\\n2026-01-31T03:48:25Z [verbose] multus-daemon started\\\\n2026-01-31T03:48:25Z [verbose] Readiness Indicator file check\\\\n2026-01-31T03:49:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.544373 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5jv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a24385e-62ca-4a82-8995-9f20115931c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5jv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:17Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.569276 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.569325 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.569340 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.569365 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.569381 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:17Z","lastTransitionTime":"2026-01-31T03:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.671897 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.671948 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.671966 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.671992 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.672007 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:17Z","lastTransitionTime":"2026-01-31T03:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.774971 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.775008 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.775016 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.775031 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.775039 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:17Z","lastTransitionTime":"2026-01-31T03:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.884108 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.884382 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.884492 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.884620 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.884828 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:17Z","lastTransitionTime":"2026-01-31T03:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.988704 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.988772 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.988799 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.988830 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:17 crc kubenswrapper[4667]: I0131 03:49:17.988900 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:17Z","lastTransitionTime":"2026-01-31T03:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.091615 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.091652 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.091660 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.091674 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.091684 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:18Z","lastTransitionTime":"2026-01-31T03:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.193667 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.193708 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.193721 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.193737 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.193749 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:18Z","lastTransitionTime":"2026-01-31T03:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.281236 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.281243 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.281287 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:18 crc kubenswrapper[4667]: E0131 03:49:18.281856 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:18 crc kubenswrapper[4667]: E0131 03:49:18.281662 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:18 crc kubenswrapper[4667]: E0131 03:49:18.281777 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.282151 4667 scope.go:117] "RemoveContainer" containerID="bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.295426 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.295460 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.295473 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.295489 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.295503 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:18Z","lastTransitionTime":"2026-01-31T03:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.296507 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 19:13:56.003259125 +0000 UTC Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.398323 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.398371 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.398384 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.398407 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.398422 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:18Z","lastTransitionTime":"2026-01-31T03:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.501318 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.501387 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.501419 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.501438 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.501450 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:18Z","lastTransitionTime":"2026-01-31T03:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.603546 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.603587 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.603597 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.603613 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.603624 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:18Z","lastTransitionTime":"2026-01-31T03:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.706378 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.706438 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.706461 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.706484 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.706501 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:18Z","lastTransitionTime":"2026-01-31T03:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.792761 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovnkube-controller/2.log" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.801251 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerStarted","Data":"f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2"} Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.801690 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.808818 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.808869 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.808881 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.808897 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.808909 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:18Z","lastTransitionTime":"2026-01-31T03:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.815549 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.827569 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.838546 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.851252 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370b5296f121631f739cdba4f61f648a9f00aec73518549365ffd970bea8db8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:49:10Z\\\",\\\"message\\\":\\\"2026-01-31T03:48:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9bb64c19-df16-4367-ac81-2cae05fe0d99\\\\n2026-01-31T03:48:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9bb64c19-df16-4367-ac81-2cae05fe0d99 to /host/opt/cni/bin/\\\\n2026-01-31T03:48:25Z [verbose] multus-daemon started\\\\n2026-01-31T03:48:25Z [verbose] Readiness Indicator file check\\\\n2026-01-31T03:49:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.865971 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5jv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a24385e-62ca-4a82-8995-9f20115931c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5jv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.887732 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd
48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.903327 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.911407 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.911452 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.911463 4667 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.911481 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.911495 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:18Z","lastTransitionTime":"2026-01-31T03:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.919223 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa0d7266-def3-467f-8ea8-8bb9d7364385\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f2362cecfaa0886df1bf67ce2fe0bc1f9586a785228c776daa0062302ae5f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a145cfd5492e6e2c3168e54747f3699b5148950bf88dc0431699e0dc6ff4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec15f3fe2b9b1c6827bc9093c19c1fe8cba5dc2aa0db3289e0a0b7029b8b09c\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.944121 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8ea9d94faf102adf3e8e0c6c13fc20da919f3b2
87704731c53453ac9fa045f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:52Z\\\",\\\"message\\\":\\\"ift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0073e06ab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-controller,},ClusterIP:10.217.5.16,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.16],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0131 03:48:52.267064 6240 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.956827 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3920ffb2-08f3-440b-bc6c-319a57bbe195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27cd88c349d4786018ab6ae21d45b22cdb95054c0b188bdce8cf97c53c09c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e3a943070029bd6e98682f2a4b3cfc0ab26dc2e9e7ab5179a60316923dbad33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4q9qz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:18Z is after 2025-08-24T17:21:41Z" Jan 31 
03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.967779 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ad8c36-abaf-4c43-a606-0ba3332c5923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b210ad25dbcd4bf7b51c2f927b5ca85daf9baccfc9d52bbc588be0116b0f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f97be14eb7d701db876925386940db52004c3cd69931268f857f10ce702c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f97be14eb7d701db876925386940db52004c3cd69931268f857f10ce702c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.981121 4667 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:18 crc kubenswrapper[4667]: I0131 03:49:18.994585 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:18Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.008585 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.013234 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.013274 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.013286 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.013305 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.013320 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:19Z","lastTransitionTime":"2026-01-31T03:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.022970 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.035547 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.051269 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.070581 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.084373 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.115616 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.115655 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.115667 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.115683 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.115692 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:19Z","lastTransitionTime":"2026-01-31T03:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.218754 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.218790 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.218801 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.218822 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.218835 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:19Z","lastTransitionTime":"2026-01-31T03:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.281074 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:19 crc kubenswrapper[4667]: E0131 03:49:19.281347 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.296632 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:36:23.925331541 +0000 UTC Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.321865 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.321897 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.321905 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.321921 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.321931 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:19Z","lastTransitionTime":"2026-01-31T03:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.424458 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.424494 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.424504 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.424517 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.424527 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:19Z","lastTransitionTime":"2026-01-31T03:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.527446 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.527488 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.527499 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.527516 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.527528 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:19Z","lastTransitionTime":"2026-01-31T03:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.629989 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.630032 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.630043 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.630060 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.630073 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:19Z","lastTransitionTime":"2026-01-31T03:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.732375 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.732411 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.732421 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.732435 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.732445 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:19Z","lastTransitionTime":"2026-01-31T03:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.810967 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovnkube-controller/3.log" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.811601 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovnkube-controller/2.log" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.813854 4667 generic.go:334] "Generic (PLEG): container finished" podID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerID="f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2" exitCode=1 Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.813898 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerDied","Data":"f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2"} Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.813937 4667 scope.go:117] "RemoveContainer" containerID="bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.814520 4667 scope.go:117] "RemoveContainer" containerID="f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2" Jan 31 03:49:19 crc kubenswrapper[4667]: E0131 03:49:19.814645 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jhj5n_openshift-ovn-kubernetes(3d685ba5-5ff5-4e74-8d02-99a233fc6c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.830811 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.834035 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.834083 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.834101 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.834367 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.834397 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:19Z","lastTransitionTime":"2026-01-31T03:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.844362 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.856368 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.869781 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.879400 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.890585 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.900801 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.910348 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.924095 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370b5296f121631f739cdba4f61f648a9f00aec73518549365ffd970bea8db8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:49:10Z\\\",\\\"message\\\":\\\"2026-01-31T03:48:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9bb64c19-df16-4367-ac81-2cae05fe0d99\\\\n2026-01-31T03:48:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9bb64c19-df16-4367-ac81-2cae05fe0d99 to /host/opt/cni/bin/\\\\n2026-01-31T03:48:25Z [verbose] multus-daemon started\\\\n2026-01-31T03:48:25Z [verbose] Readiness Indicator file check\\\\n2026-01-31T03:49:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.935910 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5jv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a24385e-62ca-4a82-8995-9f20115931c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5jv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.936859 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.936880 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.936888 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.936901 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.936909 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:19Z","lastTransitionTime":"2026-01-31T03:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.945856 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.956718 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.968570 4667 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa0d7266-def3-467f-8ea8-8bb9d7364385\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f2362cecfaa0886df1bf67ce2fe0bc1f9586a785228c776daa0062302ae5f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a145cfd5492e6e2c3168e54747f3699b5148950bf88dc0431699e0dc6ff4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec15f3fe2b9b1c6827bc9093c19c1fe8cba5dc2aa0db3289e0a0b7029b8b09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.985189 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8ea9d94faf102adf3e8e0c6c13fc20da919f3b2
87704731c53453ac9fa045f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd8eb04c461cb43803302aaa3b6a93643b780598fe63f798a8834d1c762040d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:48:52Z\\\",\\\"message\\\":\\\"ift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:mcc-proxy-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0073e06ab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9001,TargetPort:{0 9001 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-config-controller,},ClusterIP:10.217.5.16,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.16],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0131 03:48:52.267064 6240 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:49:19Z\\\",\\\"message\\\":\\\", Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0131 03:49:19.129479 6640 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 03:49:19.129516 6640 lb_config.go:1031] Cluster endpoints for openshift-dns/dns-default for network=default are: map[]\\\\nF0131 03:49:19.129530 6640 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 
2025-08-24T17:21:41Z]\\\\nI0131 03:49:19.129545 6640 services_controller.go:443] Built service openshift-dns/dns-default LB cluster-wide configs for network=default: []services.lbConfig(nil)\\\\nI0131 03:49:19.129537 6640 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:49:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.1
1\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:19 crc kubenswrapper[4667]: I0131 03:49:19.995308 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3920ffb2-08f3-440b-bc6c-319a57bbe195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27cd88c349d4786018ab6ae21d45b22cdb95054c0b188bdce8cf97c53c09c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e3a943070029bd6e98682f2a4b3cfc0ab26dc2e9e7ab5179a60316923dbad33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4q9qz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z" Jan 31 
03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.012922 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:49:20 crc kubenswrapper[4667]: E0131 03:49:20.013176 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:24.013145315 +0000 UTC m=+147.529480634 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.016626 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4
9117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.028520 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.036758 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ad8c36-abaf-4c43-a606-0ba3332c5923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b210ad25dbcd4bf7b51c2f927b5ca85daf9baccfc9d52bbc588be0116b0f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f97be14eb7d701db876925386940db52004c3cd69931268f857f10ce702c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f97be14eb7d701db876925386940db52004c3cd69931268f857f10ce702c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.039159 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.039335 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.039352 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.039471 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.039489 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:20Z","lastTransitionTime":"2026-01-31T03:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.047176 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.114343 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.114384 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.114405 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.114437 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:20 crc kubenswrapper[4667]: E0131 03:49:20.114542 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:49:20 crc kubenswrapper[4667]: E0131 03:49:20.114580 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:49:20 crc kubenswrapper[4667]: E0131 03:49:20.114595 4667 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:49:20 crc kubenswrapper[4667]: E0131 03:49:20.114618 4667 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:49:20 crc kubenswrapper[4667]: E0131 03:49:20.114556 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 03:49:20 crc kubenswrapper[4667]: E0131 03:49:20.114658 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:24.114638364 +0000 UTC m=+147.630973673 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:49:20 crc kubenswrapper[4667]: E0131 03:49:20.114665 4667 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 03:49:20 crc kubenswrapper[4667]: E0131 03:49:20.114677 4667 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:49:20 crc kubenswrapper[4667]: E0131 03:49:20.114682 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:24.114671675 +0000 UTC m=+147.631006984 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 03:49:20 crc kubenswrapper[4667]: E0131 03:49:20.114596 4667 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:49:20 crc kubenswrapper[4667]: E0131 03:49:20.114702 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:24.114690215 +0000 UTC m=+147.631025514 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 03:49:20 crc kubenswrapper[4667]: E0131 03:49:20.114720 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:24.114713216 +0000 UTC m=+147.631048535 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.141969 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.141994 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.142004 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.142021 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.142032 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:20Z","lastTransitionTime":"2026-01-31T03:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.245072 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.245135 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.245152 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.245177 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.245194 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:20Z","lastTransitionTime":"2026-01-31T03:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.281527 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.281615 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:20 crc kubenswrapper[4667]: E0131 03:49:20.281736 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:20 crc kubenswrapper[4667]: E0131 03:49:20.281961 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.281527 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:20 crc kubenswrapper[4667]: E0131 03:49:20.282094 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.297617 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 23:33:31.544634347 +0000 UTC Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.347253 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.347286 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.347294 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.347307 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.347315 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:20Z","lastTransitionTime":"2026-01-31T03:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.450999 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.451055 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.451072 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.451094 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.451111 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:20Z","lastTransitionTime":"2026-01-31T03:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.554080 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.554125 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.554136 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.554155 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.554177 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:20Z","lastTransitionTime":"2026-01-31T03:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.657134 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.657165 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.657174 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.657187 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.657198 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:20Z","lastTransitionTime":"2026-01-31T03:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.760140 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.760220 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.760248 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.760283 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.760327 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:20Z","lastTransitionTime":"2026-01-31T03:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.819940 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovnkube-controller/3.log" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.823554 4667 scope.go:117] "RemoveContainer" containerID="f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2" Jan 31 03:49:20 crc kubenswrapper[4667]: E0131 03:49:20.823693 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jhj5n_openshift-ovn-kubernetes(3d685ba5-5ff5-4e74-8d02-99a233fc6c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.845318 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.856908 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.863014 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.863046 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.863055 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.863071 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.863081 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:20Z","lastTransitionTime":"2026-01-31T03:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.867982 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a7daf4c78db3e0b9f6629c1ae75a3dad90a19d8f830bc4e3db8b48c852b3485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.879484 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://523e97dbbec93313d682bbe37cf3b8cf49936d91c8f60915bf1d8849bd53f4b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1730e8905dbea5ca3056d2002abe78755bdca22f3fbd66a11bb6c000b2289945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.892271 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50870207-38dd-40d0-8a53-0eaa3af9d1fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1c7d3d73b43c4c32aba4ba0704c399d72ff80eff878183b5791be243b17bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f49faa3fee92d4504f1774e75a363e00757d697ba34089241a43d874d6523048\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972c30fb3c6d2025c918499b6fb8936df08294e72845fe08b111d5b3e0141a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370964a6aa02c5a9bb2c41a7afc39630838371db97c3bb6a9405bf854dcac46c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d789b44b652eac4bee1300b3b8824ce33867c3098a44d969fc87cfe0dda95c6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f558b26409294c8af178dc290e74ed4d5d596fbba20b7e6d5ec263b16027e2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8702ccb50f8a0af13fafa1b5fe20badd8fe8f7e1b145effad586c49e8367006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tszz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.900773 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsr6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e83040-6e53-4c9c-afda-c21bee92d1b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5d85015202ca538e52ac5ea41e417dd6c76f81b7191007983ec9bf7fde68eb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-292wp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsr6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.913609 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c10ccda3-d9b2-4d01-897a-8498aee530b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"le observer\\\\nW0131 03:48:15.785649 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0131 03:48:15.786510 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 03:48:15.790183 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1119535395/tls.crt::/tmp/serving-cert-1119535395/tls.key\\\\\\\"\\\\nI0131 03:48:16.086916 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 03:48:16.089052 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 03:48:16.089068 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 03:48:16.089086 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 03:48:16.089091 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 03:48:16.097787 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 03:48:16.097804 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097809 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 03:48:16.097815 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 03:48:16.097818 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0131 03:48:16.097822 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 03:48:16.097825 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0131 03:48:16.098030 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0131 03:48:16.100791 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.927226 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b103bbd2-fb5d-4b2a-8b01-c32f699757df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9ff867bc008c324ad624ff71dcbf4f93b48146483c828ce43d1c10de40b0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zfwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9b7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.937908 4667 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-ns977" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57dcb541-6b8f-4730-9fd8-7ce27870e3a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559ff674832b9bb990309a535c9afb11a4f629b263495bc86311c24730b1a8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ccvwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ns977\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.952223 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cd764" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b069c8d1-f785-4509-8ee6-7d44525bdc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370b5296f121631f739cdba4f61f648a9f00aec73518549365ffd970bea8db8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:49:10Z\\\",\\\"message\\\":\\\"2026-01-31T03:48:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9bb64c19-df16-4367-ac81-2cae05fe0d99\\\\n2026-01-31T03:48:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9bb64c19-df16-4367-ac81-2cae05fe0d99 to /host/opt/cni/bin/\\\\n2026-01-31T03:48:25Z [verbose] multus-daemon started\\\\n2026-01-31T03:48:25Z [verbose] Readiness Indicator file check\\\\n2026-01-31T03:49:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:49:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n8wnt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cd764\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.962674 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5jv7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a24385e-62ca-4a82-8995-9f20115931c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp2s8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5jv7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.965611 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.965657 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.965668 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.965687 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.965699 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:20Z","lastTransitionTime":"2026-01-31T03:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.974513 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.986614 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af345b03-7933-405e-9918-4dfa4559aba8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572c8933d715b77d472cb5f4c1e3c78d3a5d9dd6857a061f4db5292274041429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fad2057c1b38b9a7628137d033413b768ea2ff18e1ece27c3db4f9279009ad9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a46df3e9a1466ef303cf6f7c703ee28b993ea1ad08bdc870c4298be0ba0804d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:20 crc kubenswrapper[4667]: I0131 03:49:20.996856 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa0d7266-def3-467f-8ea8-8bb9d7364385\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f2362cecfaa0886df1bf67ce2fe0bc1f9586a785228c776daa0062302ae5f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a145cfd5492e6e2c3168e54747f3699b5148950bf88dc0431699e0dc6ff4fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eec15f3fe2b9b1c6827bc9093c19c1fe8cba5dc2aa0db3289e0a0b7029b8b09c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901a09c39328d4cd2c2abdccd1928b5f1554d953b1271349cbdf179f93eaa4be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:20Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.018486 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T03:49:19Z\\\",\\\"message\\\":\\\", Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0131 03:49:19.129479 6640 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 03:49:19.129516 6640 lb_config.go:1031] Cluster endpoints for openshift-dns/dns-default for network=default are: map[]\\\\nF0131 03:49:19.129530 6640 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:19Z is after 2025-08-24T17:21:41Z]\\\\nI0131 03:49:19.129545 6640 services_controller.go:443] Built service openshift-dns/dns-default LB cluster-wide configs for network=default: []services.lbConfig(nil)\\\\nI0131 03:49:19.129537 6640 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T03:49:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jhj5n_openshift-ovn-kubernetes(3d685ba5-5ff5-4e74-8d02-99a233fc6c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmls5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jhj5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:21Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.028894 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3920ffb2-08f3-440b-bc6c-319a57bbe195\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e27cd88c349d4786018ab6ae21d45b22cdb95054c0b188bdce8cf97c53c09c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e3a943070029bd6e98682f2a4b3cfc0ab26dc2e9e7ab5179a60316923dbad33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlwd5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:48:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4q9qz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:21Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.047543 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f495ddf-247c-4cac-979b-710342a770f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8127777e243fb5e93d9dd430fb28ccc91a340dfd6b4169ebac2f3167e5ea1660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92ce78e24e1cbf1115918bbd93da300b4efa5434f21bf1a11669f702a894f64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8b94e5ba5276aa39d01479c1eb697edafb939d0e62ec593eed1628e7735e95d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e068f8011041fbb83af5bf15d9f856fb111b3fd
48d3707507df895249b125646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://586bfc35d3a6f331a069b76d004135156f1b13db4afcf14f1404cba6c4ec3627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ea7afb9e2e1318e324b1edd62cc856509fe6ed000900136f1c430a13c84c3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d164f7404881706e87424c91da9b4363f7294f6a3d1ba60c5285190a22f2bfe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7c5ba3086ce95940abc707ac285e9c5ec5d9f7f7da39e9f83cbe89bffadd1c3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:48:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:21Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.062454 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e8e5fbf5b62418d8b08ccaafaf9f565b19d0d1ab8dc1ad4151af14790cf4aa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:48:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:21Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.067659 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.067682 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.067691 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.067703 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.067718 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:21Z","lastTransitionTime":"2026-01-31T03:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.072720 4667 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ad8c36-abaf-4c43-a606-0ba3332c5923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:48:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T03:47:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35b210ad25dbcd4bf7b51c2f927b5ca85daf9baccfc9d52bbc588be0116b0f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T03:47:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f97be14eb7d701db876925386940db5200
4c3cd69931268f857f10ce702c39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f97be14eb7d701db876925386940db52004c3cd69931268f857f10ce702c39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T03:47:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T03:47:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T03:47:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:21Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.170029 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.170060 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.170070 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.170087 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.170099 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:21Z","lastTransitionTime":"2026-01-31T03:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.272651 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.272710 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.272732 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.272762 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.272783 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:21Z","lastTransitionTime":"2026-01-31T03:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.281051 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:21 crc kubenswrapper[4667]: E0131 03:49:21.281265 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.298051 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 03:09:58.253405728 +0000 UTC Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.375937 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.376002 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.376023 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.376051 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.376071 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:21Z","lastTransitionTime":"2026-01-31T03:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.481362 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.481409 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.481420 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.481437 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.481454 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:21Z","lastTransitionTime":"2026-01-31T03:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.583818 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.583925 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.583938 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.583951 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.583963 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:21Z","lastTransitionTime":"2026-01-31T03:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.686643 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.686689 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.686698 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.686712 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.686724 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:21Z","lastTransitionTime":"2026-01-31T03:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.789307 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.789553 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.789618 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.789726 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.789785 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:21Z","lastTransitionTime":"2026-01-31T03:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.892525 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.893091 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.893161 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.893234 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.893313 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:21Z","lastTransitionTime":"2026-01-31T03:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.996335 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.996606 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.996667 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.996728 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:21 crc kubenswrapper[4667]: I0131 03:49:21.996790 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:21Z","lastTransitionTime":"2026-01-31T03:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.104975 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.105024 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.105034 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.105049 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.105060 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:22Z","lastTransitionTime":"2026-01-31T03:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.207760 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.208146 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.208278 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.208412 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.208554 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:22Z","lastTransitionTime":"2026-01-31T03:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.281497 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.281541 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:22 crc kubenswrapper[4667]: E0131 03:49:22.281615 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:22 crc kubenswrapper[4667]: E0131 03:49:22.281734 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.281503 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:22 crc kubenswrapper[4667]: E0131 03:49:22.281801 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.298560 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:27:49.095890497 +0000 UTC Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.311702 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.311736 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.311748 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.311797 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.311812 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:22Z","lastTransitionTime":"2026-01-31T03:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.414404 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.414620 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.414728 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.414917 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.414980 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:22Z","lastTransitionTime":"2026-01-31T03:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.517207 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.517237 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.517247 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.517263 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.517277 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:22Z","lastTransitionTime":"2026-01-31T03:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.619684 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.619738 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.619762 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.619785 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.619800 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:22Z","lastTransitionTime":"2026-01-31T03:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.722509 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.722551 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.722562 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.722580 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.722609 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:22Z","lastTransitionTime":"2026-01-31T03:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.824980 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.825017 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.825028 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.825044 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.825057 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:22Z","lastTransitionTime":"2026-01-31T03:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.926742 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.926814 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.926883 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.926910 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:22 crc kubenswrapper[4667]: I0131 03:49:22.926925 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:22Z","lastTransitionTime":"2026-01-31T03:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.029526 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.029564 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.029581 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.029600 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.029615 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:23Z","lastTransitionTime":"2026-01-31T03:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.131941 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.131977 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.131984 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.131996 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.132006 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:23Z","lastTransitionTime":"2026-01-31T03:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.235023 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.235073 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.235084 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.235100 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.235112 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:23Z","lastTransitionTime":"2026-01-31T03:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.281302 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:23 crc kubenswrapper[4667]: E0131 03:49:23.281457 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.299337 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:11:52.311758178 +0000 UTC Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.337774 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.337817 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.337830 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.337886 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.337903 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:23Z","lastTransitionTime":"2026-01-31T03:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.440515 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.440545 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.440553 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.440565 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.440573 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:23Z","lastTransitionTime":"2026-01-31T03:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.542670 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.542763 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.542774 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.542787 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.542797 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:23Z","lastTransitionTime":"2026-01-31T03:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.645614 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.645664 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.645674 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.645689 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.645701 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:23Z","lastTransitionTime":"2026-01-31T03:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.749095 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.749140 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.749151 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.749171 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.749184 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:23Z","lastTransitionTime":"2026-01-31T03:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.851618 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.851688 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.851710 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.851738 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.851756 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:23Z","lastTransitionTime":"2026-01-31T03:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.927728 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.927789 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.927801 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.927820 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.927831 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:23Z","lastTransitionTime":"2026-01-31T03:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:23 crc kubenswrapper[4667]: E0131 03:49:23.941941 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.946144 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.946192 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.946203 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.946221 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.946235 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:23Z","lastTransitionTime":"2026-01-31T03:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:23 crc kubenswrapper[4667]: E0131 03:49:23.966371 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.970750 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.970788 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.970801 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.970820 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.970833 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:23Z","lastTransitionTime":"2026-01-31T03:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:23 crc kubenswrapper[4667]: E0131 03:49:23.986273 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:23Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.991164 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.991196 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.991208 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.991224 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:23 crc kubenswrapper[4667]: I0131 03:49:23.991238 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:23Z","lastTransitionTime":"2026-01-31T03:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:24 crc kubenswrapper[4667]: E0131 03:49:24.005985 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.009893 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.010004 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.010028 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.010113 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.010136 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:24Z","lastTransitionTime":"2026-01-31T03:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:24 crc kubenswrapper[4667]: E0131 03:49:24.027788 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:49:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T03:49:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b790e77-6566-44ce-a51f-ed9234cccb89\\\",\\\"systemUUID\\\":\\\"53d28e89-fb25-47fd-9db4-43074284604e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T03:49:24Z is after 2025-08-24T17:21:41Z" Jan 31 03:49:24 crc kubenswrapper[4667]: E0131 03:49:24.028182 4667 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.029962 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.030001 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.030010 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.030026 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.030038 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:24Z","lastTransitionTime":"2026-01-31T03:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.132794 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.132835 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.132859 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.132873 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.132883 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:24Z","lastTransitionTime":"2026-01-31T03:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.235154 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.235250 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.235289 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.235310 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.235323 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:24Z","lastTransitionTime":"2026-01-31T03:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.281318 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.281341 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.281372 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:24 crc kubenswrapper[4667]: E0131 03:49:24.281431 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:24 crc kubenswrapper[4667]: E0131 03:49:24.281604 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:24 crc kubenswrapper[4667]: E0131 03:49:24.281810 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.299774 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 02:33:24.838504229 +0000 UTC Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.338283 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.338321 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.338331 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.338345 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.338356 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:24Z","lastTransitionTime":"2026-01-31T03:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.440955 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.441006 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.441021 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.441043 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.441058 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:24Z","lastTransitionTime":"2026-01-31T03:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.543506 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.543556 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.543567 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.543584 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.543597 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:24Z","lastTransitionTime":"2026-01-31T03:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.646120 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.646158 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.646167 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.646180 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.646193 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:24Z","lastTransitionTime":"2026-01-31T03:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.749074 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.749108 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.749116 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.749129 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.749139 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:24Z","lastTransitionTime":"2026-01-31T03:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.851775 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.851804 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.851813 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.851826 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.851834 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:24Z","lastTransitionTime":"2026-01-31T03:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.953797 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.953833 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.953858 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.953872 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:24 crc kubenswrapper[4667]: I0131 03:49:24.953881 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:24Z","lastTransitionTime":"2026-01-31T03:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.056486 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.056529 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.056544 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.056565 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.056580 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:25Z","lastTransitionTime":"2026-01-31T03:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.159662 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.159710 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.159718 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.159732 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.159743 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:25Z","lastTransitionTime":"2026-01-31T03:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.261935 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.261970 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.261979 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.261994 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.262004 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:25Z","lastTransitionTime":"2026-01-31T03:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.282651 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:25 crc kubenswrapper[4667]: E0131 03:49:25.282822 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.300608 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 19:51:24.516954846 +0000 UTC Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.364046 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.364100 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.364112 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.364125 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.364135 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:25Z","lastTransitionTime":"2026-01-31T03:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.466576 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.466615 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.466623 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.466640 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.466649 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:25Z","lastTransitionTime":"2026-01-31T03:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.569108 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.569163 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.569172 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.569186 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.569196 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:25Z","lastTransitionTime":"2026-01-31T03:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.672640 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.672704 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.672722 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.672745 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.672761 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:25Z","lastTransitionTime":"2026-01-31T03:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.775095 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.775137 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.775152 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.775167 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.775205 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:25Z","lastTransitionTime":"2026-01-31T03:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.877009 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.877053 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.877065 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.877078 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.877092 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:25Z","lastTransitionTime":"2026-01-31T03:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.980748 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.980800 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.980811 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.980828 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:25 crc kubenswrapper[4667]: I0131 03:49:25.980858 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:25Z","lastTransitionTime":"2026-01-31T03:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.084133 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.084199 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.084221 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.084251 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.084291 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:26Z","lastTransitionTime":"2026-01-31T03:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.186755 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.186788 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.186796 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.186809 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.186818 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:26Z","lastTransitionTime":"2026-01-31T03:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.280986 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.281081 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:26 crc kubenswrapper[4667]: E0131 03:49:26.281112 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.281001 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:26 crc kubenswrapper[4667]: E0131 03:49:26.281198 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:26 crc kubenswrapper[4667]: E0131 03:49:26.281295 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.288921 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.289035 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.289132 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.289209 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.289278 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:26Z","lastTransitionTime":"2026-01-31T03:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.301534 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 21:28:02.144760476 +0000 UTC Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.391649 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.391687 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.391700 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.391715 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.391726 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:26Z","lastTransitionTime":"2026-01-31T03:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.493919 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.494154 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.494226 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.494291 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.494367 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:26Z","lastTransitionTime":"2026-01-31T03:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.596898 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.597690 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.597743 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.597781 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.597808 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:26Z","lastTransitionTime":"2026-01-31T03:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.700384 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.700587 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.700596 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.700609 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.700617 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:26Z","lastTransitionTime":"2026-01-31T03:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.802787 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.802879 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.802893 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.802911 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.802922 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:26Z","lastTransitionTime":"2026-01-31T03:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.908762 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.908828 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.908884 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.908917 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:26 crc kubenswrapper[4667]: I0131 03:49:26.908939 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:26Z","lastTransitionTime":"2026-01-31T03:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.011143 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.011221 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.011232 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.011250 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.011263 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:27Z","lastTransitionTime":"2026-01-31T03:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.112771 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.112804 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.112812 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.112825 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.112860 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:27Z","lastTransitionTime":"2026-01-31T03:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.214996 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.215030 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.215041 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.215253 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.215276 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:27Z","lastTransitionTime":"2026-01-31T03:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.280723 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:27 crc kubenswrapper[4667]: E0131 03:49:27.280871 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.301630 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:24:03.763200308 +0000 UTC Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.317694 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.317748 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.317768 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.317823 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.317877 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:27Z","lastTransitionTime":"2026-01-31T03:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.323070 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podStartSLOduration=65.323050981 podStartE2EDuration="1m5.323050981s" podCreationTimestamp="2026-01-31 03:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:27.322967659 +0000 UTC m=+90.839302988" watchObservedRunningTime="2026-01-31 03:49:27.323050981 +0000 UTC m=+90.839386280" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.336331 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-ns977" podStartSLOduration=65.336311849 podStartE2EDuration="1m5.336311849s" podCreationTimestamp="2026-01-31 03:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:27.336052733 +0000 UTC m=+90.852388072" watchObservedRunningTime="2026-01-31 03:49:27.336311849 +0000 UTC m=+90.852647148" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.407033 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-cd764" podStartSLOduration=64.407011453 podStartE2EDuration="1m4.407011453s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:27.407000733 +0000 UTC m=+90.923336032" watchObservedRunningTime="2026-01-31 03:49:27.407011453 +0000 UTC m=+90.923346752" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.419956 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.419996 4667 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.420006 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.420024 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.420036 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:27Z","lastTransitionTime":"2026-01-31T03:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.447952 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=72.447934297 podStartE2EDuration="1m12.447934297s" podCreationTimestamp="2026-01-31 03:48:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:27.447461825 +0000 UTC m=+90.963797144" watchObservedRunningTime="2026-01-31 03:49:27.447934297 +0000 UTC m=+90.964269606" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.471883 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=65.471861618 podStartE2EDuration="1m5.471861618s" podCreationTimestamp="2026-01-31 03:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:27.461424151 +0000 UTC m=+90.977759450" watchObservedRunningTime="2026-01-31 03:49:27.471861618 +0000 UTC m=+90.988196917" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.472030 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=42.472025072 podStartE2EDuration="42.472025072s" podCreationTimestamp="2026-01-31 03:48:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:27.471500068 +0000 UTC m=+90.987835357" watchObservedRunningTime="2026-01-31 03:49:27.472025072 +0000 UTC m=+90.988360371" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.505712 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4q9qz" podStartSLOduration=64.505694531 podStartE2EDuration="1m4.505694531s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:27.504736846 +0000 UTC m=+91.021072145" watchObservedRunningTime="2026-01-31 03:49:27.505694531 +0000 UTC m=+91.022029840" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.522157 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.522189 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 
31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.522198 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.522209 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.522219 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:27Z","lastTransitionTime":"2026-01-31T03:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.531416 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=25.531363066 podStartE2EDuration="25.531363066s" podCreationTimestamp="2026-01-31 03:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:27.518488367 +0000 UTC m=+91.034823686" watchObservedRunningTime="2026-01-31 03:49:27.531363066 +0000 UTC m=+91.047698375" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.549105 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.549087138 podStartE2EDuration="1m11.549087138s" podCreationTimestamp="2026-01-31 03:48:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:27.548881972 +0000 UTC m=+91.065217271" watchObservedRunningTime="2026-01-31 03:49:27.549087138 +0000 UTC m=+91.065422437" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.618761 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-zgr94" podStartSLOduration=64.618739325 podStartE2EDuration="1m4.618739325s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:27.618326324 +0000 UTC m=+91.134661623" watchObservedRunningTime="2026-01-31 03:49:27.618739325 +0000 UTC m=+91.135074624" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.623699 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.623746 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.623754 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.623780 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.623790 4667 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:27Z","lastTransitionTime":"2026-01-31T03:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.629085 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-2zsr6" podStartSLOduration=65.629066778 podStartE2EDuration="1m5.629066778s" podCreationTimestamp="2026-01-31 03:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:27.628165405 +0000 UTC m=+91.144500704" watchObservedRunningTime="2026-01-31 03:49:27.629066778 +0000 UTC m=+91.145402067" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.726104 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.726152 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.726163 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.726177 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.726185 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:27Z","lastTransitionTime":"2026-01-31T03:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.828919 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.828975 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.828993 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.829017 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.829038 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:27Z","lastTransitionTime":"2026-01-31T03:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.970168 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.970208 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.970219 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.970232 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:27 crc kubenswrapper[4667]: I0131 03:49:27.970244 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:27Z","lastTransitionTime":"2026-01-31T03:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.072043 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.072074 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.072082 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.072095 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.072105 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:28Z","lastTransitionTime":"2026-01-31T03:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.174887 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.174936 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.174950 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.174971 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.174990 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:28Z","lastTransitionTime":"2026-01-31T03:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.277300 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.277364 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.277386 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.277415 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.277437 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:28Z","lastTransitionTime":"2026-01-31T03:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.281576 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.281684 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:28 crc kubenswrapper[4667]: E0131 03:49:28.281716 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.281600 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:28 crc kubenswrapper[4667]: E0131 03:49:28.281827 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:28 crc kubenswrapper[4667]: E0131 03:49:28.281908 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.301728 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:14:57.681924251 +0000 UTC Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.380487 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.380535 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.380549 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.380567 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.380582 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:28Z","lastTransitionTime":"2026-01-31T03:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.483253 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.483291 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.483299 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.483313 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.483322 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:28Z","lastTransitionTime":"2026-01-31T03:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.586223 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.586251 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.586260 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.586271 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.586279 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:28Z","lastTransitionTime":"2026-01-31T03:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.689204 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.689319 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.689343 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.689367 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.689383 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:28Z","lastTransitionTime":"2026-01-31T03:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.792277 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.792332 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.792347 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.792373 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.792388 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:28Z","lastTransitionTime":"2026-01-31T03:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.895402 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.895447 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.895463 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.895485 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.895501 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:28Z","lastTransitionTime":"2026-01-31T03:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.998240 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.998300 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.998310 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.998331 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:28 crc kubenswrapper[4667]: I0131 03:49:28.998345 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:28Z","lastTransitionTime":"2026-01-31T03:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.101305 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.102016 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.102082 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.102119 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.102145 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:29Z","lastTransitionTime":"2026-01-31T03:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.205089 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.205134 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.205146 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.205161 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.205171 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:29Z","lastTransitionTime":"2026-01-31T03:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.280804 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:29 crc kubenswrapper[4667]: E0131 03:49:29.280961 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.302319 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 22:59:43.466941907 +0000 UTC Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.306996 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.307023 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.307031 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.307043 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.307051 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:29Z","lastTransitionTime":"2026-01-31T03:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.409346 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.409395 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.409407 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.409422 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.409434 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:29Z","lastTransitionTime":"2026-01-31T03:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.512488 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.512529 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.512540 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.512557 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.512569 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:29Z","lastTransitionTime":"2026-01-31T03:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.614578 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.614629 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.614639 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.614652 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.614661 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:29Z","lastTransitionTime":"2026-01-31T03:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.716777 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.716897 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.716910 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.716937 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.716950 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:29Z","lastTransitionTime":"2026-01-31T03:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.822205 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.822267 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.822276 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.822291 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.822301 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:29Z","lastTransitionTime":"2026-01-31T03:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.928143 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.928194 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.928205 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.928231 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:29 crc kubenswrapper[4667]: I0131 03:49:29.928247 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:29Z","lastTransitionTime":"2026-01-31T03:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.030322 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.030354 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.030362 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.030375 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.030385 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:30Z","lastTransitionTime":"2026-01-31T03:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.132883 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.132943 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.132957 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.132975 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.132990 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:30Z","lastTransitionTime":"2026-01-31T03:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.235772 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.236070 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.236190 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.236288 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.236386 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:30Z","lastTransitionTime":"2026-01-31T03:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.281058 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.281145 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.281060 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:30 crc kubenswrapper[4667]: E0131 03:49:30.281200 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:30 crc kubenswrapper[4667]: E0131 03:49:30.281293 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:30 crc kubenswrapper[4667]: E0131 03:49:30.281349 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.303284 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 22:11:10.536948766 +0000 UTC Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.338140 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.338175 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.338183 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.338197 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.338206 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:30Z","lastTransitionTime":"2026-01-31T03:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.440351 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.440389 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.440399 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.440413 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.440424 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:30Z","lastTransitionTime":"2026-01-31T03:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.542766 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.542807 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.542817 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.542832 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.542894 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:30Z","lastTransitionTime":"2026-01-31T03:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.644857 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.644892 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.644903 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.644919 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.644930 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:30Z","lastTransitionTime":"2026-01-31T03:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.746964 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.747002 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.747014 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.747029 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.747041 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:30Z","lastTransitionTime":"2026-01-31T03:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.849875 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.849908 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.849916 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.849929 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.849938 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:30Z","lastTransitionTime":"2026-01-31T03:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.952574 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.952610 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.952619 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.952634 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:30 crc kubenswrapper[4667]: I0131 03:49:30.952643 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:30Z","lastTransitionTime":"2026-01-31T03:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.054447 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.054502 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.054512 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.054528 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.054539 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:31Z","lastTransitionTime":"2026-01-31T03:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.157410 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.157448 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.157457 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.157471 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.157480 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:31Z","lastTransitionTime":"2026-01-31T03:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.260109 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.260179 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.260197 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.260222 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.260239 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:31Z","lastTransitionTime":"2026-01-31T03:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.281518 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:31 crc kubenswrapper[4667]: E0131 03:49:31.281760 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.303773 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 09:32:54.645506734 +0000 UTC Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.363212 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.363254 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.363271 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.363292 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.363307 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:31Z","lastTransitionTime":"2026-01-31T03:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.466153 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.466260 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.466290 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.466327 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.466354 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:31Z","lastTransitionTime":"2026-01-31T03:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.568970 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.569005 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.569014 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.569027 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.569037 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:31Z","lastTransitionTime":"2026-01-31T03:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.670899 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.670946 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.670957 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.670974 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.670983 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:31Z","lastTransitionTime":"2026-01-31T03:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.773976 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.774052 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.774074 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.774103 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.774123 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:31Z","lastTransitionTime":"2026-01-31T03:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.877146 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.877230 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.877250 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.877274 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.877291 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:31Z","lastTransitionTime":"2026-01-31T03:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.979986 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.980019 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.980028 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.980040 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:31 crc kubenswrapper[4667]: I0131 03:49:31.980050 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:31Z","lastTransitionTime":"2026-01-31T03:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.082901 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.082942 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.082954 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.083014 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.083035 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:32Z","lastTransitionTime":"2026-01-31T03:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.186907 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.186958 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.186975 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.186999 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.187017 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:32Z","lastTransitionTime":"2026-01-31T03:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.281255 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:32 crc kubenswrapper[4667]: E0131 03:49:32.281404 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.281272 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.281508 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:32 crc kubenswrapper[4667]: E0131 03:49:32.281708 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:32 crc kubenswrapper[4667]: E0131 03:49:32.282136 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.282439 4667 scope.go:117] "RemoveContainer" containerID="f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2" Jan 31 03:49:32 crc kubenswrapper[4667]: E0131 03:49:32.282618 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jhj5n_openshift-ovn-kubernetes(3d685ba5-5ff5-4e74-8d02-99a233fc6c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.289532 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.289557 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.289565 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.289578 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.289588 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:32Z","lastTransitionTime":"2026-01-31T03:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.304894 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 14:46:36.911919998 +0000 UTC Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.392791 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.392879 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.392897 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.392923 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.392946 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:32Z","lastTransitionTime":"2026-01-31T03:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.496129 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.496208 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.496219 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.496233 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.496445 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:32Z","lastTransitionTime":"2026-01-31T03:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.599166 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.599225 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.599242 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.599269 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.599286 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:32Z","lastTransitionTime":"2026-01-31T03:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.702739 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.702805 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.702827 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.702889 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.702912 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:32Z","lastTransitionTime":"2026-01-31T03:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.806358 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.806493 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.806524 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.806548 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.806567 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:32Z","lastTransitionTime":"2026-01-31T03:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.909317 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.909363 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.909382 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.909405 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:32 crc kubenswrapper[4667]: I0131 03:49:32.909423 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:32Z","lastTransitionTime":"2026-01-31T03:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.012292 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.012361 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.012379 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.012408 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.012433 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:33Z","lastTransitionTime":"2026-01-31T03:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.116039 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.116082 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.116093 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.116110 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.116124 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:33Z","lastTransitionTime":"2026-01-31T03:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.220032 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.220075 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.220084 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.220117 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.220128 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:33Z","lastTransitionTime":"2026-01-31T03:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.282000 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:33 crc kubenswrapper[4667]: E0131 03:49:33.283646 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.305031 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 17:19:57.330177166 +0000 UTC Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.322120 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.322175 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.322188 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.322205 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.322216 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:33Z","lastTransitionTime":"2026-01-31T03:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.424660 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.424717 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.424727 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.424740 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.424751 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:33Z","lastTransitionTime":"2026-01-31T03:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.527251 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.527298 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.527308 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.527321 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.527331 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:33Z","lastTransitionTime":"2026-01-31T03:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.629335 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.629367 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.629375 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.629389 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.629399 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:33Z","lastTransitionTime":"2026-01-31T03:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.731966 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.732013 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.732027 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.732049 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.732064 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:33Z","lastTransitionTime":"2026-01-31T03:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.834122 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.834155 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.834165 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.834194 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.834204 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:33Z","lastTransitionTime":"2026-01-31T03:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.936574 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.936607 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.936616 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.936630 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:33 crc kubenswrapper[4667]: I0131 03:49:33.936639 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:33Z","lastTransitionTime":"2026-01-31T03:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.039379 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.039417 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.039427 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.039442 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.039451 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:34Z","lastTransitionTime":"2026-01-31T03:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.142215 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.142281 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.142300 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.142322 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.142340 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:34Z","lastTransitionTime":"2026-01-31T03:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.245661 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.246251 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.246356 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.246444 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.246532 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:34Z","lastTransitionTime":"2026-01-31T03:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.281006 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:34 crc kubenswrapper[4667]: E0131 03:49:34.281119 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.281517 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.281631 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:34 crc kubenswrapper[4667]: E0131 03:49:34.281828 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:34 crc kubenswrapper[4667]: E0131 03:49:34.282093 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.306037 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:56:50.5613253 +0000 UTC Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.349342 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.349399 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.349419 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.349445 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.349468 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:34Z","lastTransitionTime":"2026-01-31T03:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.381669 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.381728 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.381751 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.381778 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.381802 4667 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T03:49:34Z","lastTransitionTime":"2026-01-31T03:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.458122 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8"] Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.458484 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.460071 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.462005 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.462689 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.466470 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.641980 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85b4bb1a-8571-4357-93ab-2daf2949fa62-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wrxn8\" (UID: \"85b4bb1a-8571-4357-93ab-2daf2949fa62\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.642090 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85b4bb1a-8571-4357-93ab-2daf2949fa62-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wrxn8\" (UID: \"85b4bb1a-8571-4357-93ab-2daf2949fa62\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.642135 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85b4bb1a-8571-4357-93ab-2daf2949fa62-serving-cert\") pod 
\"cluster-version-operator-5c965bbfc6-wrxn8\" (UID: \"85b4bb1a-8571-4357-93ab-2daf2949fa62\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.642222 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85b4bb1a-8571-4357-93ab-2daf2949fa62-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wrxn8\" (UID: \"85b4bb1a-8571-4357-93ab-2daf2949fa62\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.642265 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85b4bb1a-8571-4357-93ab-2daf2949fa62-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wrxn8\" (UID: \"85b4bb1a-8571-4357-93ab-2daf2949fa62\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.742686 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85b4bb1a-8571-4357-93ab-2daf2949fa62-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wrxn8\" (UID: \"85b4bb1a-8571-4357-93ab-2daf2949fa62\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.742758 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85b4bb1a-8571-4357-93ab-2daf2949fa62-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wrxn8\" (UID: \"85b4bb1a-8571-4357-93ab-2daf2949fa62\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.742787 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85b4bb1a-8571-4357-93ab-2daf2949fa62-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wrxn8\" (UID: \"85b4bb1a-8571-4357-93ab-2daf2949fa62\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.742827 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85b4bb1a-8571-4357-93ab-2daf2949fa62-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wrxn8\" (UID: \"85b4bb1a-8571-4357-93ab-2daf2949fa62\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.742894 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85b4bb1a-8571-4357-93ab-2daf2949fa62-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wrxn8\" (UID: \"85b4bb1a-8571-4357-93ab-2daf2949fa62\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.744296 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85b4bb1a-8571-4357-93ab-2daf2949fa62-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wrxn8\" (UID: 
\"85b4bb1a-8571-4357-93ab-2daf2949fa62\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.744379 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85b4bb1a-8571-4357-93ab-2daf2949fa62-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wrxn8\" (UID: \"85b4bb1a-8571-4357-93ab-2daf2949fa62\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.745295 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85b4bb1a-8571-4357-93ab-2daf2949fa62-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wrxn8\" (UID: \"85b4bb1a-8571-4357-93ab-2daf2949fa62\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.751321 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85b4bb1a-8571-4357-93ab-2daf2949fa62-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wrxn8\" (UID: \"85b4bb1a-8571-4357-93ab-2daf2949fa62\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.767649 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85b4bb1a-8571-4357-93ab-2daf2949fa62-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wrxn8\" (UID: \"85b4bb1a-8571-4357-93ab-2daf2949fa62\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.771309 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8" Jan 31 03:49:34 crc kubenswrapper[4667]: I0131 03:49:34.871142 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8" event={"ID":"85b4bb1a-8571-4357-93ab-2daf2949fa62","Type":"ContainerStarted","Data":"6b44377f3a164a872742d5694a3110571f5c9815f130bb8b36c1958bd8f8ce8f"} Jan 31 03:49:35 crc kubenswrapper[4667]: I0131 03:49:35.281569 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:35 crc kubenswrapper[4667]: E0131 03:49:35.281695 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:35 crc kubenswrapper[4667]: I0131 03:49:35.307280 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 17:19:08.777571648 +0000 UTC Jan 31 03:49:35 crc kubenswrapper[4667]: I0131 03:49:35.307374 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 31 03:49:35 crc kubenswrapper[4667]: I0131 03:49:35.317017 4667 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 03:49:35 crc kubenswrapper[4667]: I0131 03:49:35.875612 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8" event={"ID":"85b4bb1a-8571-4357-93ab-2daf2949fa62","Type":"ContainerStarted","Data":"6c5ba4c0914814d1136f7d71531a66f9f22c695b062e9853cda1818d97a6e9c6"} Jan 31 03:49:35 crc kubenswrapper[4667]: I0131 03:49:35.891095 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrxn8" podStartSLOduration=73.891080845 podStartE2EDuration="1m13.891080845s" podCreationTimestamp="2026-01-31 03:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:49:35.890426369 +0000 UTC m=+99.406761668" watchObservedRunningTime="2026-01-31 03:49:35.891080845 +0000 UTC m=+99.407416144" Jan 31 03:49:36 crc kubenswrapper[4667]: I0131 03:49:36.281566 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:36 crc kubenswrapper[4667]: E0131 03:49:36.282282 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:36 crc kubenswrapper[4667]: I0131 03:49:36.281754 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:36 crc kubenswrapper[4667]: E0131 03:49:36.282565 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:36 crc kubenswrapper[4667]: I0131 03:49:36.281677 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:36 crc kubenswrapper[4667]: E0131 03:49:36.283056 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:37 crc kubenswrapper[4667]: I0131 03:49:37.281291 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:37 crc kubenswrapper[4667]: E0131 03:49:37.281547 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:38 crc kubenswrapper[4667]: I0131 03:49:38.280948 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:38 crc kubenswrapper[4667]: I0131 03:49:38.281072 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:38 crc kubenswrapper[4667]: I0131 03:49:38.281624 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:38 crc kubenswrapper[4667]: E0131 03:49:38.281881 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:38 crc kubenswrapper[4667]: E0131 03:49:38.281945 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:38 crc kubenswrapper[4667]: E0131 03:49:38.282010 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:39 crc kubenswrapper[4667]: I0131 03:49:39.281480 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:39 crc kubenswrapper[4667]: E0131 03:49:39.281714 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:40 crc kubenswrapper[4667]: I0131 03:49:40.281148 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:40 crc kubenswrapper[4667]: I0131 03:49:40.281202 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:40 crc kubenswrapper[4667]: E0131 03:49:40.281294 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:40 crc kubenswrapper[4667]: I0131 03:49:40.281311 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:40 crc kubenswrapper[4667]: E0131 03:49:40.281364 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:40 crc kubenswrapper[4667]: E0131 03:49:40.281426 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:41 crc kubenswrapper[4667]: I0131 03:49:41.281548 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:41 crc kubenswrapper[4667]: E0131 03:49:41.281674 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:41 crc kubenswrapper[4667]: I0131 03:49:41.657761 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs\") pod \"network-metrics-daemon-n5jv7\" (UID: \"4a24385e-62ca-4a82-8995-9f20115931c4\") " pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:41 crc kubenswrapper[4667]: E0131 03:49:41.657952 4667 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:49:41 crc kubenswrapper[4667]: E0131 03:49:41.658011 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs podName:4a24385e-62ca-4a82-8995-9f20115931c4 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:45.657994865 +0000 UTC m=+169.174330164 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs") pod "network-metrics-daemon-n5jv7" (UID: "4a24385e-62ca-4a82-8995-9f20115931c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 03:49:42 crc kubenswrapper[4667]: I0131 03:49:42.281500 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:42 crc kubenswrapper[4667]: I0131 03:49:42.281564 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:42 crc kubenswrapper[4667]: E0131 03:49:42.281616 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:42 crc kubenswrapper[4667]: E0131 03:49:42.281723 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:42 crc kubenswrapper[4667]: I0131 03:49:42.282220 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:42 crc kubenswrapper[4667]: E0131 03:49:42.282408 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:43 crc kubenswrapper[4667]: I0131 03:49:43.281683 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:43 crc kubenswrapper[4667]: E0131 03:49:43.281888 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:44 crc kubenswrapper[4667]: I0131 03:49:44.281238 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:44 crc kubenswrapper[4667]: I0131 03:49:44.281304 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:44 crc kubenswrapper[4667]: E0131 03:49:44.281370 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:44 crc kubenswrapper[4667]: E0131 03:49:44.281475 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:44 crc kubenswrapper[4667]: I0131 03:49:44.281526 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:44 crc kubenswrapper[4667]: E0131 03:49:44.282341 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:44 crc kubenswrapper[4667]: I0131 03:49:44.282676 4667 scope.go:117] "RemoveContainer" containerID="f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2" Jan 31 03:49:44 crc kubenswrapper[4667]: E0131 03:49:44.283217 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jhj5n_openshift-ovn-kubernetes(3d685ba5-5ff5-4e74-8d02-99a233fc6c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" Jan 31 03:49:45 crc kubenswrapper[4667]: I0131 03:49:45.282155 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:45 crc kubenswrapper[4667]: E0131 03:49:45.282333 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:46 crc kubenswrapper[4667]: I0131 03:49:46.281750 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:46 crc kubenswrapper[4667]: I0131 03:49:46.281996 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:46 crc kubenswrapper[4667]: I0131 03:49:46.281784 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:46 crc kubenswrapper[4667]: E0131 03:49:46.282122 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:46 crc kubenswrapper[4667]: E0131 03:49:46.282257 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:46 crc kubenswrapper[4667]: E0131 03:49:46.282540 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:47 crc kubenswrapper[4667]: I0131 03:49:47.281151 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:47 crc kubenswrapper[4667]: E0131 03:49:47.282132 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:48 crc kubenswrapper[4667]: I0131 03:49:48.281669 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:48 crc kubenswrapper[4667]: E0131 03:49:48.282122 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:48 crc kubenswrapper[4667]: I0131 03:49:48.281767 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:48 crc kubenswrapper[4667]: I0131 03:49:48.281820 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:48 crc kubenswrapper[4667]: E0131 03:49:48.282276 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:48 crc kubenswrapper[4667]: E0131 03:49:48.282658 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:49 crc kubenswrapper[4667]: I0131 03:49:49.280930 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:49 crc kubenswrapper[4667]: E0131 03:49:49.281088 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:50 crc kubenswrapper[4667]: I0131 03:49:50.281110 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:50 crc kubenswrapper[4667]: I0131 03:49:50.281184 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:50 crc kubenswrapper[4667]: I0131 03:49:50.281241 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:50 crc kubenswrapper[4667]: E0131 03:49:50.281317 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:50 crc kubenswrapper[4667]: E0131 03:49:50.281536 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:50 crc kubenswrapper[4667]: E0131 03:49:50.281630 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:51 crc kubenswrapper[4667]: I0131 03:49:51.280979 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:51 crc kubenswrapper[4667]: E0131 03:49:51.281455 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:52 crc kubenswrapper[4667]: I0131 03:49:52.280656 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:52 crc kubenswrapper[4667]: I0131 03:49:52.280672 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:52 crc kubenswrapper[4667]: E0131 03:49:52.280786 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:52 crc kubenswrapper[4667]: E0131 03:49:52.281142 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:52 crc kubenswrapper[4667]: I0131 03:49:52.281171 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:52 crc kubenswrapper[4667]: E0131 03:49:52.281278 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:53 crc kubenswrapper[4667]: I0131 03:49:53.281365 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:53 crc kubenswrapper[4667]: E0131 03:49:53.281618 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:54 crc kubenswrapper[4667]: I0131 03:49:54.280970 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:54 crc kubenswrapper[4667]: E0131 03:49:54.281122 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:54 crc kubenswrapper[4667]: I0131 03:49:54.281371 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:54 crc kubenswrapper[4667]: E0131 03:49:54.281501 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:54 crc kubenswrapper[4667]: I0131 03:49:54.281701 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:54 crc kubenswrapper[4667]: E0131 03:49:54.281787 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:55 crc kubenswrapper[4667]: I0131 03:49:55.280790 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:55 crc kubenswrapper[4667]: E0131 03:49:55.281158 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:56 crc kubenswrapper[4667]: I0131 03:49:56.281169 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:56 crc kubenswrapper[4667]: I0131 03:49:56.281211 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:56 crc kubenswrapper[4667]: I0131 03:49:56.281288 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:56 crc kubenswrapper[4667]: E0131 03:49:56.281402 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:56 crc kubenswrapper[4667]: E0131 03:49:56.281515 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:56 crc kubenswrapper[4667]: E0131 03:49:56.281694 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:56 crc kubenswrapper[4667]: I0131 03:49:56.953283 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cd764_b069c8d1-f785-4509-8ee6-7d44525bdc89/kube-multus/1.log" Jan 31 03:49:56 crc kubenswrapper[4667]: I0131 03:49:56.954151 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cd764_b069c8d1-f785-4509-8ee6-7d44525bdc89/kube-multus/0.log" Jan 31 03:49:56 crc kubenswrapper[4667]: I0131 03:49:56.954231 4667 generic.go:334] "Generic (PLEG): container finished" podID="b069c8d1-f785-4509-8ee6-7d44525bdc89" containerID="370b5296f121631f739cdba4f61f648a9f00aec73518549365ffd970bea8db8d" exitCode=1 Jan 31 03:49:56 crc kubenswrapper[4667]: I0131 03:49:56.954422 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cd764" event={"ID":"b069c8d1-f785-4509-8ee6-7d44525bdc89","Type":"ContainerDied","Data":"370b5296f121631f739cdba4f61f648a9f00aec73518549365ffd970bea8db8d"} Jan 31 03:49:56 crc kubenswrapper[4667]: I0131 03:49:56.954504 4667 scope.go:117] "RemoveContainer" containerID="3014a6072d180863fd8be274b221dc47c9cd792188b8bc80621db1892ffdf64a" Jan 31 03:49:56 crc kubenswrapper[4667]: I0131 03:49:56.955263 4667 scope.go:117] "RemoveContainer" containerID="370b5296f121631f739cdba4f61f648a9f00aec73518549365ffd970bea8db8d" Jan 31 03:49:56 crc kubenswrapper[4667]: E0131 03:49:56.955932 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-cd764_openshift-multus(b069c8d1-f785-4509-8ee6-7d44525bdc89)\"" pod="openshift-multus/multus-cd764" podUID="b069c8d1-f785-4509-8ee6-7d44525bdc89" Jan 31 03:49:57 crc kubenswrapper[4667]: E0131 03:49:57.238866 4667 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 31 03:49:57 crc kubenswrapper[4667]: I0131 03:49:57.281170 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:57 crc kubenswrapper[4667]: E0131 03:49:57.281971 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:49:57 crc kubenswrapper[4667]: I0131 03:49:57.287511 4667 scope.go:117] "RemoveContainer" containerID="f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2" Jan 31 03:49:57 crc kubenswrapper[4667]: E0131 03:49:57.287963 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jhj5n_openshift-ovn-kubernetes(3d685ba5-5ff5-4e74-8d02-99a233fc6c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" Jan 31 03:49:57 crc kubenswrapper[4667]: E0131 03:49:57.395321 4667 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 03:49:57 crc kubenswrapper[4667]: I0131 03:49:57.958812 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cd764_b069c8d1-f785-4509-8ee6-7d44525bdc89/kube-multus/1.log" Jan 31 03:49:58 crc kubenswrapper[4667]: I0131 03:49:58.281228 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:49:58 crc kubenswrapper[4667]: E0131 03:49:58.281748 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:49:58 crc kubenswrapper[4667]: I0131 03:49:58.282012 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:49:58 crc kubenswrapper[4667]: I0131 03:49:58.282101 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:49:58 crc kubenswrapper[4667]: E0131 03:49:58.282143 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:49:58 crc kubenswrapper[4667]: E0131 03:49:58.282219 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:49:59 crc kubenswrapper[4667]: I0131 03:49:59.281216 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:49:59 crc kubenswrapper[4667]: E0131 03:49:59.281445 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:50:00 crc kubenswrapper[4667]: I0131 03:50:00.281321 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:50:00 crc kubenswrapper[4667]: I0131 03:50:00.281420 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:50:00 crc kubenswrapper[4667]: I0131 03:50:00.281506 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:50:00 crc kubenswrapper[4667]: E0131 03:50:00.281509 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:50:00 crc kubenswrapper[4667]: E0131 03:50:00.281644 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:50:00 crc kubenswrapper[4667]: E0131 03:50:00.281954 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:50:01 crc kubenswrapper[4667]: I0131 03:50:01.281515 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:50:01 crc kubenswrapper[4667]: E0131 03:50:01.282775 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:50:02 crc kubenswrapper[4667]: I0131 03:50:02.280988 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:50:02 crc kubenswrapper[4667]: I0131 03:50:02.281055 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:50:02 crc kubenswrapper[4667]: E0131 03:50:02.281111 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:50:02 crc kubenswrapper[4667]: E0131 03:50:02.281233 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:50:02 crc kubenswrapper[4667]: I0131 03:50:02.280998 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:50:02 crc kubenswrapper[4667]: E0131 03:50:02.281359 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:50:02 crc kubenswrapper[4667]: E0131 03:50:02.397140 4667 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 03:50:03 crc kubenswrapper[4667]: I0131 03:50:03.282078 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:50:03 crc kubenswrapper[4667]: E0131 03:50:03.282239 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:50:04 crc kubenswrapper[4667]: I0131 03:50:04.280999 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:50:04 crc kubenswrapper[4667]: I0131 03:50:04.281085 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:50:04 crc kubenswrapper[4667]: E0131 03:50:04.281130 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:50:04 crc kubenswrapper[4667]: I0131 03:50:04.281325 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:50:04 crc kubenswrapper[4667]: E0131 03:50:04.281335 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:50:04 crc kubenswrapper[4667]: E0131 03:50:04.281396 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:50:05 crc kubenswrapper[4667]: I0131 03:50:05.281948 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:50:05 crc kubenswrapper[4667]: E0131 03:50:05.283124 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:50:06 crc kubenswrapper[4667]: I0131 03:50:06.281590 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:50:06 crc kubenswrapper[4667]: I0131 03:50:06.281627 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:50:06 crc kubenswrapper[4667]: I0131 03:50:06.281689 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:50:06 crc kubenswrapper[4667]: E0131 03:50:06.281774 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:50:06 crc kubenswrapper[4667]: E0131 03:50:06.282020 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:50:06 crc kubenswrapper[4667]: E0131 03:50:06.282109 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:50:07 crc kubenswrapper[4667]: I0131 03:50:07.281174 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:50:07 crc kubenswrapper[4667]: E0131 03:50:07.283282 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:50:07 crc kubenswrapper[4667]: I0131 03:50:07.283925 4667 scope.go:117] "RemoveContainer" containerID="370b5296f121631f739cdba4f61f648a9f00aec73518549365ffd970bea8db8d" Jan 31 03:50:07 crc kubenswrapper[4667]: E0131 03:50:07.397701 4667 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 03:50:07 crc kubenswrapper[4667]: I0131 03:50:07.995668 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cd764_b069c8d1-f785-4509-8ee6-7d44525bdc89/kube-multus/1.log" Jan 31 03:50:07 crc kubenswrapper[4667]: I0131 03:50:07.995727 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cd764" event={"ID":"b069c8d1-f785-4509-8ee6-7d44525bdc89","Type":"ContainerStarted","Data":"9984a610f48d7ddbc022492b34bc1a1bd85aab975477a59f5f05018d5841f13a"} Jan 31 03:50:08 crc kubenswrapper[4667]: I0131 03:50:08.281534 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:50:08 crc kubenswrapper[4667]: I0131 03:50:08.281617 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:50:08 crc kubenswrapper[4667]: I0131 03:50:08.282262 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:50:08 crc kubenswrapper[4667]: E0131 03:50:08.282350 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:50:08 crc kubenswrapper[4667]: E0131 03:50:08.282435 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:50:08 crc kubenswrapper[4667]: E0131 03:50:08.282929 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:50:09 crc kubenswrapper[4667]: I0131 03:50:09.281263 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:50:09 crc kubenswrapper[4667]: E0131 03:50:09.282426 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:50:10 crc kubenswrapper[4667]: I0131 03:50:10.281445 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:50:10 crc kubenswrapper[4667]: I0131 03:50:10.281499 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:50:10 crc kubenswrapper[4667]: E0131 03:50:10.281589 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:50:10 crc kubenswrapper[4667]: I0131 03:50:10.281637 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:50:10 crc kubenswrapper[4667]: E0131 03:50:10.281666 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:50:10 crc kubenswrapper[4667]: E0131 03:50:10.281818 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:50:11 crc kubenswrapper[4667]: I0131 03:50:11.281366 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:50:11 crc kubenswrapper[4667]: E0131 03:50:11.281586 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:50:11 crc kubenswrapper[4667]: I0131 03:50:11.282808 4667 scope.go:117] "RemoveContainer" containerID="f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2" Jan 31 03:50:12 crc kubenswrapper[4667]: I0131 03:50:12.011253 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovnkube-controller/3.log" Jan 31 03:50:12 crc kubenswrapper[4667]: I0131 03:50:12.014323 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerStarted","Data":"506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b"} Jan 31 03:50:12 crc kubenswrapper[4667]: I0131 03:50:12.014787 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:50:12 crc kubenswrapper[4667]: I0131 03:50:12.046949 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" podStartSLOduration=109.046930515 podStartE2EDuration="1m49.046930515s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:12.044881152 +0000 UTC m=+135.561216481" watchObservedRunningTime="2026-01-31 03:50:12.046930515 +0000 UTC m=+135.563265814" Jan 31 03:50:12 crc kubenswrapper[4667]: I0131 03:50:12.281312 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:50:12 crc kubenswrapper[4667]: I0131 03:50:12.281321 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:50:12 crc kubenswrapper[4667]: E0131 03:50:12.281454 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:50:12 crc kubenswrapper[4667]: I0131 03:50:12.281342 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:50:12 crc kubenswrapper[4667]: E0131 03:50:12.281523 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:50:12 crc kubenswrapper[4667]: E0131 03:50:12.281576 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:50:12 crc kubenswrapper[4667]: I0131 03:50:12.308235 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n5jv7"] Jan 31 03:50:12 crc kubenswrapper[4667]: I0131 03:50:12.308349 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:50:12 crc kubenswrapper[4667]: E0131 03:50:12.308460 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:50:12 crc kubenswrapper[4667]: E0131 03:50:12.399044 4667 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 03:50:14 crc kubenswrapper[4667]: I0131 03:50:14.281587 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:50:14 crc kubenswrapper[4667]: I0131 03:50:14.281644 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:50:14 crc kubenswrapper[4667]: I0131 03:50:14.281663 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:50:14 crc kubenswrapper[4667]: E0131 03:50:14.281727 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:50:14 crc kubenswrapper[4667]: I0131 03:50:14.281831 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:50:14 crc kubenswrapper[4667]: E0131 03:50:14.281830 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:50:14 crc kubenswrapper[4667]: E0131 03:50:14.281926 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:50:14 crc kubenswrapper[4667]: E0131 03:50:14.282081 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:50:16 crc kubenswrapper[4667]: I0131 03:50:16.281135 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:50:16 crc kubenswrapper[4667]: I0131 03:50:16.281167 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:50:16 crc kubenswrapper[4667]: E0131 03:50:16.281294 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 03:50:16 crc kubenswrapper[4667]: I0131 03:50:16.281475 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:50:16 crc kubenswrapper[4667]: E0131 03:50:16.281541 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5jv7" podUID="4a24385e-62ca-4a82-8995-9f20115931c4" Jan 31 03:50:16 crc kubenswrapper[4667]: I0131 03:50:16.281659 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:50:16 crc kubenswrapper[4667]: E0131 03:50:16.281715 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 03:50:16 crc kubenswrapper[4667]: E0131 03:50:16.281891 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 03:50:18 crc kubenswrapper[4667]: I0131 03:50:18.281261 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:50:18 crc kubenswrapper[4667]: I0131 03:50:18.281318 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:50:18 crc kubenswrapper[4667]: I0131 03:50:18.281261 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:50:18 crc kubenswrapper[4667]: I0131 03:50:18.281277 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:50:18 crc kubenswrapper[4667]: I0131 03:50:18.283685 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 03:50:18 crc kubenswrapper[4667]: I0131 03:50:18.283990 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 03:50:18 crc kubenswrapper[4667]: I0131 03:50:18.284052 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 03:50:18 crc kubenswrapper[4667]: I0131 03:50:18.283990 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 03:50:18 crc kubenswrapper[4667]: I0131 03:50:18.284375 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 03:50:18 crc kubenswrapper[4667]: I0131 03:50:18.284400 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 03:50:24 crc kubenswrapper[4667]: I0131 03:50:24.063711 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:24 crc kubenswrapper[4667]: E0131 03:50:24.063883 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:52:26.063860011 +0000 UTC m=+269.580195320 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:24 crc kubenswrapper[4667]: I0131 03:50:24.165166 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:50:24 crc kubenswrapper[4667]: I0131 03:50:24.165235 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:50:24 crc kubenswrapper[4667]: I0131 03:50:24.165263 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:50:24 crc kubenswrapper[4667]: I0131 03:50:24.165287 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:50:24 crc kubenswrapper[4667]: I0131 03:50:24.166464 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:50:24 crc kubenswrapper[4667]: I0131 03:50:24.173645 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:50:24 crc kubenswrapper[4667]: I0131 03:50:24.174649 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:50:24 crc kubenswrapper[4667]: I0131 03:50:24.174699 4667 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:50:24 crc kubenswrapper[4667]: I0131 03:50:24.299148 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 03:50:24 crc kubenswrapper[4667]: I0131 03:50:24.313948 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:50:24 crc kubenswrapper[4667]: I0131 03:50:24.323744 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 03:50:24 crc kubenswrapper[4667]: W0131 03:50:24.586076 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-268c8a11d85ba9a826b5488f84aad0e8eda66785760b46b51680ac50c8cb80ce WatchSource:0}: Error finding container 268c8a11d85ba9a826b5488f84aad0e8eda66785760b46b51680ac50c8cb80ce: Status 404 returned error can't find the container with id 268c8a11d85ba9a826b5488f84aad0e8eda66785760b46b51680ac50c8cb80ce Jan 31 03:50:24 crc kubenswrapper[4667]: W0131 03:50:24.820025 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-0134d7f879602368a4dd7362789dfb168871e69c9f792a60df471172831cbe20 WatchSource:0}: Error finding container 0134d7f879602368a4dd7362789dfb168871e69c9f792a60df471172831cbe20: Status 404 returned error can't find the container with id 0134d7f879602368a4dd7362789dfb168871e69c9f792a60df471172831cbe20 Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.069804 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"51c604c9a7cf5046cb242f71cd5de4a304224d73512c328ce3424edccc6a4d1d"} Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.070296 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0134d7f879602368a4dd7362789dfb168871e69c9f792a60df471172831cbe20"} Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.071396 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ce63f25046b00ea4ede7deac2d1493bf569ce7b1248f550b80e596c142c52158"} Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.071481 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"268c8a11d85ba9a826b5488f84aad0e8eda66785760b46b51680ac50c8cb80ce"} Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.073075 4667 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"dc1de3e3dd086a6652dc05e97fa17642a3cd8882ccba92d1d8ccdf6343a5bc1d"} Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.073251 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4fc801d2ce2c60837c580c5b5ce4b211d8aa026557d305af01497e553545c68a"} Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.074454 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.693375 4667 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.731823 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-5zj2q"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.732284 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5zj2q" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.735248 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.735629 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.736283 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xhcb5"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.736568 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.737351 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mt595"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.737918 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mt595" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.738355 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.738826 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.739373 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7txvq"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.740053 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.740467 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg"] Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.741210 4667 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv": failed to list *v1.Secret: secrets "openshift-apiserver-operator-dockercfg-xtcjv" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.741258 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-xtcjv\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-operator-dockercfg-xtcjv\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.741218 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.749386 4667 reflector.go:561] object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx": failed to list *v1.Secret: secrets "cluster-image-registry-operator-dockercfg-m4qtx" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.749448 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-m4qtx\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cluster-image-registry-operator-dockercfg-m4qtx\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.749528 4667 reflector.go:561] object-"openshift-image-registry"/"trusted-ca": failed to list *v1.ConfigMap: configmaps "trusted-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.749548 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.749594 4667 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: secrets "openshift-apiserver-sa-dockercfg-djjff" is forbidden: User "system:node:crc" cannot list resource "secrets" in 
API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.749611 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-sa-dockercfg-djjff\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.749684 4667 reflector.go:561] object-"openshift-image-registry"/"image-registry-operator-tls": failed to list *v1.Secret: secrets "image-registry-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.749699 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"image-registry-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.749731 4667 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.749744 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.749786 4667 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.749799 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.749927 4667 reflector.go:561] object-"openshift-authentication-operator"/"service-ca-bundle": failed to list *v1.ConfigMap: configmaps "service-ca-bundle" is forbidden: User "system:node:crc" 
cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.749946 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"service-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.750472 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.750966 4667 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: configmaps "openshift-apiserver-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.750995 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-apiserver-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.751096 4667 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.751116 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.751179 4667 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": failed to list *v1.Secret: secrets "authentication-operator-dockercfg-mz9bj" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.751194 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-mz9bj\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"authentication-operator-dockercfg-mz9bj\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API 
group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.751213 4667 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.751244 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.751247 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.751292 4667 reflector.go:561] object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.751338 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.751371 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.751410 4667 reflector.go:561] object-"openshift-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-config-operator": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.751413 4667 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.751427 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-config-operator\": no relationship found between node 'crc' and this object" 
logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.751436 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.751510 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.751516 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.751535 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.751511 4667 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.751623 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.751668 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.751906 4667 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.751928 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.751973 4667 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.751987 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource 
\"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.752042 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.752068 4667 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.752088 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.752228 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.752330 4667 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: configmaps "authentication-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.752360 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"authentication-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.752425 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 03:50:25 crc kubenswrapper[4667]: W0131 03:50:25.752531 4667 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 31 03:50:25 crc kubenswrapper[4667]: E0131 03:50:25.752562 4667 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.753142 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 03:50:25 crc 
kubenswrapper[4667]: I0131 03:50:25.753454 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.753654 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.753848 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.754029 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.754243 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.754316 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.754738 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gf8vs"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.755103 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.755505 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.756166 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q74f5"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.756479 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pfnkq"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.756958 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pfnkq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.756965 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q74f5" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.758983 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zpjcj"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.759617 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zpjcj" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.768440 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.768549 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-jkdkh"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.769175 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jkdkh" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.773818 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dmtcm"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.774734 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.781085 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783562 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbf59768-adfb-48f6-b68b-ebf1675f1807-node-pullsecrets\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783592 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baefc4bd-d927-4cf9-94af-eab8b042b3ca-config\") pod \"controller-manager-879f6c89f-gf8vs\" (UID: \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783607 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4dd56584-ddc5-48e9-be73-9758dca8dddf-etcd-client\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783626 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b119a43-b446-4226-9490-a7ba5baf2815-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rzjpv\" (UID: \"9b119a43-b446-4226-9490-a7ba5baf2815\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783642 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-audit\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783656 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mvxg\" (UniqueName: \"kubernetes.io/projected/9b119a43-b446-4226-9490-a7ba5baf2815-kube-api-access-6mvxg\") pod \"cluster-image-registry-operator-dc59b4c8b-rzjpv\" (UID: \"9b119a43-b446-4226-9490-a7ba5baf2815\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783671 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-config\") pod 
\"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783696 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x58lz\" (UniqueName: \"kubernetes.io/projected/a540fba9-faa8-4cfb-b907-4e7099429e30-kube-api-access-x58lz\") pod \"machine-approver-56656f9798-jkdkh\" (UID: \"a540fba9-faa8-4cfb-b907-4e7099429e30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jkdkh" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783713 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd56584-ddc5-48e9-be73-9758dca8dddf-serving-cert\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783726 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/402d584b-6176-4cee-8e27-cc233b48feec-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mt595\" (UID: \"402d584b-6176-4cee-8e27-cc233b48feec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mt595" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783744 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-config\") pod \"route-controller-manager-6576b87f9c-m5nl2\" (UID: \"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783757 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cbf59768-adfb-48f6-b68b-ebf1675f1807-etcd-client\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783771 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baefc4bd-d927-4cf9-94af-eab8b042b3ca-serving-cert\") pod \"controller-manager-879f6c89f-gf8vs\" (UID: \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783786 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5phmx\" (UniqueName: \"kubernetes.io/projected/9af91113-a315-4416-a1f2-6566c16278cf-kube-api-access-5phmx\") pod \"openshift-config-operator-7777fb866f-ms8lf\" (UID: \"9af91113-a315-4416-a1f2-6566c16278cf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783801 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc88x\" (UniqueName: \"kubernetes.io/projected/cbf59768-adfb-48f6-b68b-ebf1675f1807-kube-api-access-xc88x\") pod \"apiserver-76f77b778f-7txvq\" (UID: 
\"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783816 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhp6x\" (UniqueName: \"kubernetes.io/projected/1ddc8dd4-60ac-4d28-8cab-1139c300a29c-kube-api-access-jhp6x\") pod \"cluster-samples-operator-665b6dd947-pfnkq\" (UID: \"1ddc8dd4-60ac-4d28-8cab-1139c300a29c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pfnkq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783831 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4dd56584-ddc5-48e9-be73-9758dca8dddf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783850 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbvlq\" (UniqueName: \"kubernetes.io/projected/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-kube-api-access-rbvlq\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783877 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/baefc4bd-d927-4cf9-94af-eab8b042b3ca-client-ca\") pod \"controller-manager-879f6c89f-gf8vs\" (UID: \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783895 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jh8g\" (UniqueName: \"kubernetes.io/projected/83d090b3-311a-4b89-aa7d-de1ca0b237d6-kube-api-access-9jh8g\") pod \"machine-api-operator-5694c8668f-zpjcj\" (UID: \"83d090b3-311a-4b89-aa7d-de1ca0b237d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zpjcj" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783910 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs6x9\" (UniqueName: \"kubernetes.io/projected/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-kube-api-access-bs6x9\") pod \"route-controller-manager-6576b87f9c-m5nl2\" (UID: \"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783928 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-service-ca-bundle\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783942 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783960 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9af91113-a315-4416-a1f2-6566c16278cf-serving-cert\") pod \"openshift-config-operator-7777fb866f-ms8lf\" (UID: \"9af91113-a315-4416-a1f2-6566c16278cf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783974 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-client-ca\") pod \"route-controller-manager-6576b87f9c-m5nl2\" (UID: \"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.783988 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/402d584b-6176-4cee-8e27-cc233b48feec-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mt595\" (UID: \"402d584b-6176-4cee-8e27-cc233b48feec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mt595" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784003 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ksbp\" (UniqueName: \"kubernetes.io/projected/402d584b-6176-4cee-8e27-cc233b48feec-kube-api-access-4ksbp\") pod \"openshift-apiserver-operator-796bbdcf4f-mt595\" (UID: \"402d584b-6176-4cee-8e27-cc233b48feec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mt595" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784054 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/83d090b3-311a-4b89-aa7d-de1ca0b237d6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zpjcj\" (UID: \"83d090b3-311a-4b89-aa7d-de1ca0b237d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zpjcj" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784130 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm8fw\" (UniqueName: \"kubernetes.io/projected/4dd56584-ddc5-48e9-be73-9758dca8dddf-kube-api-access-wm8fw\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784164 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf59768-adfb-48f6-b68b-ebf1675f1807-serving-cert\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784168 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 
03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784183 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrqkz\" (UniqueName: \"kubernetes.io/projected/baefc4bd-d927-4cf9-94af-eab8b042b3ca-kube-api-access-rrqkz\") pod \"controller-manager-879f6c89f-gf8vs\" (UID: \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784207 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a540fba9-faa8-4cfb-b907-4e7099429e30-machine-approver-tls\") pod \"machine-approver-56656f9798-jkdkh\" (UID: \"a540fba9-faa8-4cfb-b907-4e7099429e30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jkdkh" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784229 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-etcd-serving-ca\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784266 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9af91113-a315-4416-a1f2-6566c16278cf-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ms8lf\" (UID: \"9af91113-a315-4416-a1f2-6566c16278cf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784286 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b119a43-b446-4226-9490-a7ba5baf2815-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rzjpv\" (UID: \"9b119a43-b446-4226-9490-a7ba5baf2815\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784319 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784343 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a540fba9-faa8-4cfb-b907-4e7099429e30-auth-proxy-config\") pod \"machine-approver-56656f9798-jkdkh\" (UID: \"a540fba9-faa8-4cfb-b907-4e7099429e30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jkdkh" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784357 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/83d090b3-311a-4b89-aa7d-de1ca0b237d6-images\") pod \"machine-api-operator-5694c8668f-zpjcj\" (UID: \"83d090b3-311a-4b89-aa7d-de1ca0b237d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zpjcj" Jan 31 03:50:25 crc 
kubenswrapper[4667]: I0131 03:50:25.784376 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4dd56584-ddc5-48e9-be73-9758dca8dddf-audit-policies\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784390 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4dd56584-ddc5-48e9-be73-9758dca8dddf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784403 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cbf59768-adfb-48f6-b68b-ebf1675f1807-encryption-config\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784422 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4dd56584-ddc5-48e9-be73-9758dca8dddf-encryption-config\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784439 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/baefc4bd-d927-4cf9-94af-eab8b042b3ca-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gf8vs\" (UID: \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784453 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-serving-cert\") pod \"route-controller-manager-6576b87f9c-m5nl2\" (UID: \"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784492 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdzlz\" (UniqueName: \"kubernetes.io/projected/745b1e30-1f16-4539-847b-88db36eb6d4b-kube-api-access-gdzlz\") pod \"downloads-7954f5f757-5zj2q\" (UID: \"745b1e30-1f16-4539-847b-88db36eb6d4b\") " pod="openshift-console/downloads-7954f5f757-5zj2q" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784504 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784525 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-image-import-ca\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " 
pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784543 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784765 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784793 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784544 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4dd56584-ddc5-48e9-be73-9758dca8dddf-audit-dir\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784844 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-serving-cert\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784862 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a540fba9-faa8-4cfb-b907-4e7099429e30-config\") pod \"machine-approver-56656f9798-jkdkh\" (UID: \"a540fba9-faa8-4cfb-b907-4e7099429e30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jkdkh" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784901 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ddc8dd4-60ac-4d28-8cab-1139c300a29c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pfnkq\" (UID: \"1ddc8dd4-60ac-4d28-8cab-1139c300a29c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pfnkq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784920 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b119a43-b446-4226-9490-a7ba5baf2815-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rzjpv\" (UID: \"9b119a43-b446-4226-9490-a7ba5baf2815\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784936 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbf59768-adfb-48f6-b68b-ebf1675f1807-audit-dir\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784947 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784976 4667 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"config" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.784951 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a88ec07-7527-4e9e-ad37-a2ad408658a6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q74f5\" (UID: \"9a88ec07-7527-4e9e-ad37-a2ad408658a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q74f5" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.785100 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a88ec07-7527-4e9e-ad37-a2ad408658a6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q74f5\" (UID: \"9a88ec07-7527-4e9e-ad37-a2ad408658a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q74f5" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.785107 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.785120 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9pz8\" (UniqueName: \"kubernetes.io/projected/9a88ec07-7527-4e9e-ad37-a2ad408658a6-kube-api-access-p9pz8\") pod \"openshift-controller-manager-operator-756b6f6bc6-q74f5\" (UID: \"9a88ec07-7527-4e9e-ad37-a2ad408658a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q74f5" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.785135 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-config\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.785151 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83d090b3-311a-4b89-aa7d-de1ca0b237d6-config\") pod \"machine-api-operator-5694c8668f-zpjcj\" (UID: \"83d090b3-311a-4b89-aa7d-de1ca0b237d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zpjcj" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.785386 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.786267 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-wjsth"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.786689 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.792483 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.792868 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.793023 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.793201 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.793356 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.793489 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.793654 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.793778 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.796313 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.796480 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.807284 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.807496 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.807675 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.809092 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.810110 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.811320 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.813560 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.814239 4667 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.814545 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.815089 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.815195 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.815493 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.821349 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.821754 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.821905 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.822006 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.822085 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.822200 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.822262 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.829134 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.830277 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.830672 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.830883 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.830975 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.831218 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.831526 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.831650 4667 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-oauth-config" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.832617 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.832761 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.834057 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.834131 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.834260 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.834351 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.834456 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.834812 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.838409 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.839049 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nnvtr"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.839728 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nnvtr" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.840042 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w7g4m"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.845331 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4qz94"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.845918 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.846236 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.852961 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.853912 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.854072 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.854785 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sbbxx"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.855317 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-sbbxx" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.855627 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.855649 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.855990 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.870917 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.871178 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.872359 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.874896 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.875668 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.876278 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lcnlf"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.876818 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lcnlf" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.877652 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.877720 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.878084 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.878953 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-czqrm"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.879481 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-czqrm" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.885179 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nvj6q"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.885746 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xzsnp"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.912577 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.913869 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nvj6q" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.915280 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.917893 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a540fba9-faa8-4cfb-b907-4e7099429e30-config\") pod \"machine-approver-56656f9798-jkdkh\" (UID: \"a540fba9-faa8-4cfb-b907-4e7099429e30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jkdkh" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.918016 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.918047 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.918071 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ddc8dd4-60ac-4d28-8cab-1139c300a29c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pfnkq\" (UID: \"1ddc8dd4-60ac-4d28-8cab-1139c300a29c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pfnkq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.918187 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/c2cef73f-5410-499e-ae70-491c866c1b48-trusted-ca\") pod \"ingress-operator-5b745b69d9-bclzx\" (UID: \"c2cef73f-5410-499e-ae70-491c866c1b48\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.918208 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b119a43-b446-4226-9490-a7ba5baf2815-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rzjpv\" (UID: \"9b119a43-b446-4226-9490-a7ba5baf2815\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.918325 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9h9d\" (UniqueName: \"kubernetes.io/projected/0400a903-d02e-41b4-99f3-3c7b57744839-kube-api-access-d9h9d\") pod \"dns-operator-744455d44c-sbbxx\" (UID: \"0400a903-d02e-41b4-99f3-3c7b57744839\") " pod="openshift-dns-operator/dns-operator-744455d44c-sbbxx" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.918351 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbf59768-adfb-48f6-b68b-ebf1675f1807-audit-dir\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.918384 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a88ec07-7527-4e9e-ad37-a2ad408658a6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q74f5\" (UID: \"9a88ec07-7527-4e9e-ad37-a2ad408658a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q74f5" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.918484 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a88ec07-7527-4e9e-ad37-a2ad408658a6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q74f5\" (UID: \"9a88ec07-7527-4e9e-ad37-a2ad408658a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q74f5" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.918507 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9pz8\" (UniqueName: \"kubernetes.io/projected/9a88ec07-7527-4e9e-ad37-a2ad408658a6-kube-api-access-p9pz8\") pod \"openshift-controller-manager-operator-756b6f6bc6-q74f5\" (UID: \"9a88ec07-7527-4e9e-ad37-a2ad408658a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q74f5" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.918633 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-config\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.918659 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83d090b3-311a-4b89-aa7d-de1ca0b237d6-config\") 
pod \"machine-api-operator-5694c8668f-zpjcj\" (UID: \"83d090b3-311a-4b89-aa7d-de1ca0b237d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zpjcj" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.918680 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbf59768-adfb-48f6-b68b-ebf1675f1807-node-pullsecrets\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.918797 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baefc4bd-d927-4cf9-94af-eab8b042b3ca-config\") pod \"controller-manager-879f6c89f-gf8vs\" (UID: \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.918816 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4dd56584-ddc5-48e9-be73-9758dca8dddf-etcd-client\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.919077 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-xzsnp" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.923831 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.924846 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbf59768-adfb-48f6-b68b-ebf1675f1807-audit-dir\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.925566 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a540fba9-faa8-4cfb-b907-4e7099429e30-config\") pod \"machine-approver-56656f9798-jkdkh\" (UID: \"a540fba9-faa8-4cfb-b907-4e7099429e30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jkdkh" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.928372 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbf59768-adfb-48f6-b68b-ebf1675f1807-node-pullsecrets\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.928667 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b119a43-b446-4226-9490-a7ba5baf2815-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rzjpv\" (UID: \"9b119a43-b446-4226-9490-a7ba5baf2815\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.928720 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2cef73f-5410-499e-ae70-491c866c1b48-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bclzx\" (UID: \"c2cef73f-5410-499e-ae70-491c866c1b48\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.928751 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.928790 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-audit\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.928817 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mvxg\" (UniqueName: \"kubernetes.io/projected/9b119a43-b446-4226-9490-a7ba5baf2815-kube-api-access-6mvxg\") pod \"cluster-image-registry-operator-dc59b4c8b-rzjpv\" (UID: \"9b119a43-b446-4226-9490-a7ba5baf2815\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.928846 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-config\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.928868 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.928933 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x58lz\" (UniqueName: \"kubernetes.io/projected/a540fba9-faa8-4cfb-b907-4e7099429e30-kube-api-access-x58lz\") pod \"machine-approver-56656f9798-jkdkh\" (UID: \"a540fba9-faa8-4cfb-b907-4e7099429e30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jkdkh" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.928957 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.928994 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4dd56584-ddc5-48e9-be73-9758dca8dddf-serving-cert\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929026 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/402d584b-6176-4cee-8e27-cc233b48feec-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mt595\" (UID: \"402d584b-6176-4cee-8e27-cc233b48feec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mt595" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929057 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-config\") pod \"route-controller-manager-6576b87f9c-m5nl2\" (UID: \"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929077 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929102 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929126 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-audit-policies\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929157 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cbf59768-adfb-48f6-b68b-ebf1675f1807-etcd-client\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929178 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlphk\" (UniqueName: \"kubernetes.io/projected/21469e62-0345-41f0-a07b-eac67df38faf-kube-api-access-tlphk\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929206 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baefc4bd-d927-4cf9-94af-eab8b042b3ca-serving-cert\") pod 
\"controller-manager-879f6c89f-gf8vs\" (UID: \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929229 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929258 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5phmx\" (UniqueName: \"kubernetes.io/projected/9af91113-a315-4416-a1f2-6566c16278cf-kube-api-access-5phmx\") pod \"openshift-config-operator-7777fb866f-ms8lf\" (UID: \"9af91113-a315-4416-a1f2-6566c16278cf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929285 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc88x\" (UniqueName: \"kubernetes.io/projected/cbf59768-adfb-48f6-b68b-ebf1675f1807-kube-api-access-xc88x\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929302 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhp6x\" (UniqueName: \"kubernetes.io/projected/1ddc8dd4-60ac-4d28-8cab-1139c300a29c-kube-api-access-jhp6x\") pod \"cluster-samples-operator-665b6dd947-pfnkq\" (UID: \"1ddc8dd4-60ac-4d28-8cab-1139c300a29c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pfnkq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929323 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4dd56584-ddc5-48e9-be73-9758dca8dddf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929361 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbvlq\" (UniqueName: \"kubernetes.io/projected/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-kube-api-access-rbvlq\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929387 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/baefc4bd-d927-4cf9-94af-eab8b042b3ca-client-ca\") pod \"controller-manager-879f6c89f-gf8vs\" (UID: \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929411 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jh8g\" (UniqueName: \"kubernetes.io/projected/83d090b3-311a-4b89-aa7d-de1ca0b237d6-kube-api-access-9jh8g\") pod \"machine-api-operator-5694c8668f-zpjcj\" (UID: 
\"83d090b3-311a-4b89-aa7d-de1ca0b237d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zpjcj" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929432 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs6x9\" (UniqueName: \"kubernetes.io/projected/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-kube-api-access-bs6x9\") pod \"route-controller-manager-6576b87f9c-m5nl2\" (UID: \"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929462 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-service-ca-bundle\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929486 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0400a903-d02e-41b4-99f3-3c7b57744839-metrics-tls\") pod \"dns-operator-744455d44c-sbbxx\" (UID: \"0400a903-d02e-41b4-99f3-3c7b57744839\") " pod="openshift-dns-operator/dns-operator-744455d44c-sbbxx" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929513 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929536 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21469e62-0345-41f0-a07b-eac67df38faf-audit-dir\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929560 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9af91113-a315-4416-a1f2-6566c16278cf-serving-cert\") pod \"openshift-config-operator-7777fb866f-ms8lf\" (UID: \"9af91113-a315-4416-a1f2-6566c16278cf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929582 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-client-ca\") pod \"route-controller-manager-6576b87f9c-m5nl2\" (UID: \"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929604 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/402d584b-6176-4cee-8e27-cc233b48feec-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mt595\" (UID: \"402d584b-6176-4cee-8e27-cc233b48feec\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mt595" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929623 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ksbp\" (UniqueName: \"kubernetes.io/projected/402d584b-6176-4cee-8e27-cc233b48feec-kube-api-access-4ksbp\") pod \"openshift-apiserver-operator-796bbdcf4f-mt595\" (UID: \"402d584b-6176-4cee-8e27-cc233b48feec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mt595" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929645 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/83d090b3-311a-4b89-aa7d-de1ca0b237d6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zpjcj\" (UID: \"83d090b3-311a-4b89-aa7d-de1ca0b237d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zpjcj" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929668 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm8fw\" (UniqueName: \"kubernetes.io/projected/4dd56584-ddc5-48e9-be73-9758dca8dddf-kube-api-access-wm8fw\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929692 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf59768-adfb-48f6-b68b-ebf1675f1807-serving-cert\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929712 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrqkz\" (UniqueName: \"kubernetes.io/projected/baefc4bd-d927-4cf9-94af-eab8b042b3ca-kube-api-access-rrqkz\") pod \"controller-manager-879f6c89f-gf8vs\" (UID: \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929735 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a540fba9-faa8-4cfb-b907-4e7099429e30-machine-approver-tls\") pod \"machine-approver-56656f9798-jkdkh\" (UID: \"a540fba9-faa8-4cfb-b907-4e7099429e30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jkdkh" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929758 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c2cef73f-5410-499e-ae70-491c866c1b48-metrics-tls\") pod \"ingress-operator-5b745b69d9-bclzx\" (UID: \"c2cef73f-5410-499e-ae70-491c866c1b48\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929893 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 03:50:25 crc 
kubenswrapper[4667]: I0131 03:50:25.929925 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-etcd-serving-ca\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929952 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9af91113-a315-4416-a1f2-6566c16278cf-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ms8lf\" (UID: \"9af91113-a315-4416-a1f2-6566c16278cf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.929977 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b119a43-b446-4226-9490-a7ba5baf2815-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rzjpv\" (UID: \"9b119a43-b446-4226-9490-a7ba5baf2815\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.930001 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.930032 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.930055 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a540fba9-faa8-4cfb-b907-4e7099429e30-auth-proxy-config\") pod \"machine-approver-56656f9798-jkdkh\" (UID: \"a540fba9-faa8-4cfb-b907-4e7099429e30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jkdkh" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.930075 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/83d090b3-311a-4b89-aa7d-de1ca0b237d6-images\") pod \"machine-api-operator-5694c8668f-zpjcj\" (UID: \"83d090b3-311a-4b89-aa7d-de1ca0b237d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zpjcj" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.930095 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4dd56584-ddc5-48e9-be73-9758dca8dddf-audit-policies\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.930114 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4dd56584-ddc5-48e9-be73-9758dca8dddf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.930136 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cbf59768-adfb-48f6-b68b-ebf1675f1807-encryption-config\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.930153 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4dd56584-ddc5-48e9-be73-9758dca8dddf-encryption-config\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.930175 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/baefc4bd-d927-4cf9-94af-eab8b042b3ca-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gf8vs\" (UID: \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.930195 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-serving-cert\") pod \"route-controller-manager-6576b87f9c-m5nl2\" (UID: \"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.930223 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdzlz\" (UniqueName: \"kubernetes.io/projected/745b1e30-1f16-4539-847b-88db36eb6d4b-kube-api-access-gdzlz\") pod \"downloads-7954f5f757-5zj2q\" (UID: \"745b1e30-1f16-4539-847b-88db36eb6d4b\") " pod="openshift-console/downloads-7954f5f757-5zj2q" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.930248 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-image-import-ca\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.930371 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4dd56584-ddc5-48e9-be73-9758dca8dddf-audit-dir\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.944123 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 
03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.944205 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-kvgs8"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.944218 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-serving-cert\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.944696 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d92f\" (UniqueName: \"kubernetes.io/projected/c2cef73f-5410-499e-ae70-491c866c1b48-kube-api-access-6d92f\") pod \"ingress-operator-5b745b69d9-bclzx\" (UID: \"c2cef73f-5410-499e-ae70-491c866c1b48\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.940748 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4dd56584-ddc5-48e9-be73-9758dca8dddf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.940839 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4dd56584-ddc5-48e9-be73-9758dca8dddf-audit-dir\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.942824 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4dd56584-ddc5-48e9-be73-9758dca8dddf-etcd-client\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.943333 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ddc8dd4-60ac-4d28-8cab-1139c300a29c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pfnkq\" (UID: \"1ddc8dd4-60ac-4d28-8cab-1139c300a29c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pfnkq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.936001 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-audit\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.933517 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83d090b3-311a-4b89-aa7d-de1ca0b237d6-config\") pod \"machine-api-operator-5694c8668f-zpjcj\" (UID: \"83d090b3-311a-4b89-aa7d-de1ca0b237d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zpjcj" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.934679 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/baefc4bd-d927-4cf9-94af-eab8b042b3ca-config\") pod \"controller-manager-879f6c89f-gf8vs\" (UID: \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.938285 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-config\") pod \"route-controller-manager-6576b87f9c-m5nl2\" (UID: \"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.977356 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4dd56584-ddc5-48e9-be73-9758dca8dddf-encryption-config\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.977638 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.981184 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.982596 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/83d090b3-311a-4b89-aa7d-de1ca0b237d6-images\") pod \"machine-api-operator-5694c8668f-zpjcj\" (UID: \"83d090b3-311a-4b89-aa7d-de1ca0b237d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zpjcj" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.983557 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-client-ca\") pod \"route-controller-manager-6576b87f9c-m5nl2\" (UID: \"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.989616 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a88ec07-7527-4e9e-ad37-a2ad408658a6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-q74f5\" (UID: \"9a88ec07-7527-4e9e-ad37-a2ad408658a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q74f5" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.989616 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/402d584b-6176-4cee-8e27-cc233b48feec-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mt595\" (UID: \"402d584b-6176-4cee-8e27-cc233b48feec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mt595" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.990171 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd56584-ddc5-48e9-be73-9758dca8dddf-serving-cert\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc 
kubenswrapper[4667]: I0131 03:50:25.990970 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9af91113-a315-4416-a1f2-6566c16278cf-serving-cert\") pod \"openshift-config-operator-7777fb866f-ms8lf\" (UID: \"9af91113-a315-4416-a1f2-6566c16278cf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.991333 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baefc4bd-d927-4cf9-94af-eab8b042b3ca-serving-cert\") pod \"controller-manager-879f6c89f-gf8vs\" (UID: \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.991704 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9af91113-a315-4416-a1f2-6566c16278cf-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ms8lf\" (UID: \"9af91113-a315-4416-a1f2-6566c16278cf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.991789 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a88ec07-7527-4e9e-ad37-a2ad408658a6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-q74f5\" (UID: \"9a88ec07-7527-4e9e-ad37-a2ad408658a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q74f5" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.991854 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a540fba9-faa8-4cfb-b907-4e7099429e30-machine-approver-tls\") pod \"machine-approver-56656f9798-jkdkh\" (UID: \"a540fba9-faa8-4cfb-b907-4e7099429e30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jkdkh" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.991883 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4dd56584-ddc5-48e9-be73-9758dca8dddf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.992281 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a540fba9-faa8-4cfb-b907-4e7099429e30-auth-proxy-config\") pod \"machine-approver-56656f9798-jkdkh\" (UID: \"a540fba9-faa8-4cfb-b907-4e7099429e30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jkdkh" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.992406 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4dd56584-ddc5-48e9-be73-9758dca8dddf-audit-policies\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.993208 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 
03:50:25.993766 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/baefc4bd-d927-4cf9-94af-eab8b042b3ca-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gf8vs\" (UID: \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.994131 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-image-import-ca\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.994623 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/baefc4bd-d927-4cf9-94af-eab8b042b3ca-client-ca\") pod \"controller-manager-879f6c89f-gf8vs\" (UID: \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.994799 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.994927 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-sjv9d"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.995641 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-sjv9d" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.996032 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j2rkz"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.996590 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2rkz" Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.996784 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b"] Jan 31 03:50:25 crc kubenswrapper[4667]: I0131 03:50:25.997311 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.002060 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.005296 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cbf59768-adfb-48f6-b68b-ebf1675f1807-encryption-config\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.007597 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/83d090b3-311a-4b89-aa7d-de1ca0b237d6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zpjcj\" (UID: \"83d090b3-311a-4b89-aa7d-de1ca0b237d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zpjcj" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.007710 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-serving-cert\") pod \"route-controller-manager-6576b87f9c-m5nl2\" (UID: \"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.018368 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vsb97"] Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.019347 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7pbrg"] Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.019687 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.019909 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vsb97" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.020919 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv"] Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.020949 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dndtw"] Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.021397 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-dndtw" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.021518 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-stnvq"] Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.022067 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-stnvq" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.027121 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.027756 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq"] Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.028382 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.037133 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.037269 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s9tvt"] Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.037715 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp"] Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.038089 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.038261 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s9tvt" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.041942 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9pdkp"] Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.042663 4667 util.go:30] "No sandbox for pod can be found. 
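A note on the entries above: the "SyncLoop ADD" / "SyncLoop UPDATE" lines (kubelet.go:2421/2428) are the kubelet sync loop consuming pod events from the API server, and each "No sandbox for pod can be found. Need to start a new one" line (util.go:30) records the kubelet deciding to create a fresh pod sandbox because none exists yet for that pod. A minimal schematic of that dispatch pattern, with invented names (PodEvent, hasSandbox) that are not kubelet types:

    // Schematic only: a tiny event loop in the spirit of the kubelet sync loop
    // entries above, not the actual kubelet implementation.
    package main

    import "fmt"

    type PodEvent struct {
        Op  string // "ADD" or "UPDATE", mirroring "SyncLoop ADD"/"SyncLoop UPDATE"
        Pod string // namespace/name, e.g. "openshift-dns/dns-default-kfr9j"
    }

    // hasSandbox stands in for the runtime lookup behind the
    // "No sandbox for pod can be found" message.
    func hasSandbox(pod string) bool { return false }

    func syncLoop(events <-chan PodEvent) {
        for ev := range events {
            fmt.Printf("SyncLoop %s pods=[%q]\n", ev.Op, ev.Pod)
            if !hasSandbox(ev.Pod) {
                // The real kubelet would now ask the CRI runtime for a new sandbox.
                fmt.Printf("No sandbox for pod can be found. Need to start a new one pod=%q\n", ev.Pod)
            }
        }
    }

    func main() {
        events := make(chan PodEvent, 1)
        events <- PodEvent{Op: "ADD", Pod: "openshift-dns/dns-default-kfr9j"}
        close(events)
        syncLoop(events)
    }

In the real kubelet the sandbox creation then goes through the CRI runtime; here the decision point is only printed.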
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.051189 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5zj2q"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.052213 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mt595"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.054078 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.054583 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.055520 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c2cef73f-5410-499e-ae70-491c866c1b48-metrics-tls\") pod \"ingress-operator-5b745b69d9-bclzx\" (UID: \"c2cef73f-5410-499e-ae70-491c866c1b48\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.055756 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.056067 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.056247 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.056362 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d92f\" (UniqueName: \"kubernetes.io/projected/c2cef73f-5410-499e-ae70-491c866c1b48-kube-api-access-6d92f\") pod \"ingress-operator-5b745b69d9-bclzx\" (UID: \"c2cef73f-5410-499e-ae70-491c866c1b48\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.056489 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.056590 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.056687 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2cef73f-5410-499e-ae70-491c866c1b48-trusted-ca\") pod \"ingress-operator-5b745b69d9-bclzx\" (UID: \"c2cef73f-5410-499e-ae70-491c866c1b48\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.056784 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9h9d\" (UniqueName: \"kubernetes.io/projected/0400a903-d02e-41b4-99f3-3c7b57744839-kube-api-access-d9h9d\") pod \"dns-operator-744455d44c-sbbxx\" (UID: \"0400a903-d02e-41b4-99f3-3c7b57744839\") " pod="openshift-dns-operator/dns-operator-744455d44c-sbbxx"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.056918 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2cef73f-5410-499e-ae70-491c866c1b48-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bclzx\" (UID: \"c2cef73f-5410-499e-ae70-491c866c1b48\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.057009 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.057102 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.057168 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.057253 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.057361 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.057432 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-audit-policies\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.057511 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlphk\" (UniqueName: \"kubernetes.io/projected/21469e62-0345-41f0-a07b-eac67df38faf-kube-api-access-tlphk\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.057586 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.057702 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0400a903-d02e-41b4-99f3-3c7b57744839-metrics-tls\") pod \"dns-operator-744455d44c-sbbxx\" (UID: \"0400a903-d02e-41b4-99f3-3c7b57744839\") " pod="openshift-dns-operator/dns-operator-744455d44c-sbbxx"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.057796 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21469e62-0345-41f0-a07b-eac67df38faf-audit-dir\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.057968 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21469e62-0345-41f0-a07b-eac67df38faf-audit-dir\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.060354 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7txvq"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.064279 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.066154 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-audit-policies\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.069393 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.069793 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.069943 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.070605 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.071000 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.071331 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.071366 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-82j72"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.073282 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-82j72"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.072227 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.074170 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.078970 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.079146 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.079312 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q74f5"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.080181 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.080429 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.081162 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gf8vs"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.082308 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.090934 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2"]
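The interleaved "operationExecutor.MountVolume started" (reconciler_common.go:218) and "MountVolume.SetUp succeeded" (operation_generator.go:637) pairs above are the kubelet volume manager reconciling its desired state (volumes the scheduled pods need) against the actual state (volumes already mounted). A sketch of that desired-vs-actual loop, reduced to set comparison for illustration; the names here are invented and this is not kubelet source:

    // Schematic desired-vs-actual reconciliation: mount whatever is desired
    // but not yet mounted, logging in the same two-phase style as above.
    package main

    import "fmt"

    func reconcile(desired []string, mounted map[string]bool) {
        for _, vol := range desired {
            if mounted[vol] {
                continue // already in the actual state of the world
            }
            fmt.Printf("operationExecutor.MountVolume started for volume %q\n", vol)
            // ... the mount itself would happen here; on success:
            mounted[vol] = true
            fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", vol)
        }
    }

    func main() {
        mounted := map[string]bool{"serving-cert": true}
        reconcile([]string{"serving-cert", "audit-policies"}, mounted)
        // Only "audit-policies" is mounted on this pass.
    }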
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.091336 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xhcb5"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.092561 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-xf9cn"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.093381 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xf9cn"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.094186 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-kfr9j"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.094990 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-kfr9j"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.095171 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.099702 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w7g4m"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.099733 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xzsnp"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.099743 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.102997 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lcnlf"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.103031 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4qz94"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.103168 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.112498 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nnvtr"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.112584 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vsb97"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.119946 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zpjcj"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.119992 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pfnkq"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.123758 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.138224 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wjsth"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.147845 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.153845 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.155471 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dndtw"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.157481 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.159535 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dmtcm"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.161906 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-sjv9d"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.163067 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0400a903-d02e-41b4-99f3-3c7b57744839-metrics-tls\") pod \"dns-operator-744455d44c-sbbxx\" (UID: \"0400a903-d02e-41b4-99f3-3c7b57744839\") " pod="openshift-dns-operator/dns-operator-744455d44c-sbbxx"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.163637 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nvj6q"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.166381 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7pbrg"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.166407 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.167669 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-82j72"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.169631 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s9tvt"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.170092 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-pgpmm"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.170918 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pgpmm"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.172019 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-czqrm"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.173371 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.174234 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kfr9j"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.181545 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.183687 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xf9cn"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.185005 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sbbxx"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.185798 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-stnvq"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.188908 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j2rkz"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.189015 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9pdkp"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.189809 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr"]
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.193035 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c2cef73f-5410-499e-ae70-491c866c1b48-metrics-tls\") pod \"ingress-operator-5b745b69d9-bclzx\" (UID: \"c2cef73f-5410-499e-ae70-491c866c1b48\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.193821 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.213956 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.233766 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.278972 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.286495 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2cef73f-5410-499e-ae70-491c866c1b48-trusted-ca\") pod \"ingress-operator-5b745b69d9-bclzx\" (UID: \"c2cef73f-5410-499e-ae70-491c866c1b48\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.295431 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.313996 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.339073 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.354653 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.375397 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.394364 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.415148 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.434193 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.454331 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.474810 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.494156 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.513638 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.533679 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.554425 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.573983 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.593947 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.614766 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.633975 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.654987 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.674716 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.717150 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs6x9\" (UniqueName: \"kubernetes.io/projected/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-kube-api-access-bs6x9\") pod \"route-controller-manager-6576b87f9c-m5nl2\" (UID: \"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.734151 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mvxg\" (UniqueName: \"kubernetes.io/projected/9b119a43-b446-4226-9490-a7ba5baf2815-kube-api-access-6mvxg\") pod \"cluster-image-registry-operator-dc59b4c8b-rzjpv\" (UID: \"9b119a43-b446-4226-9490-a7ba5baf2815\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.753260 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x58lz\" (UniqueName: \"kubernetes.io/projected/a540fba9-faa8-4cfb-b907-4e7099429e30-kube-api-access-x58lz\") pod \"machine-approver-56656f9798-jkdkh\" (UID: \"a540fba9-faa8-4cfb-b907-4e7099429e30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jkdkh" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.808325 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jh8g\" (UniqueName: \"kubernetes.io/projected/83d090b3-311a-4b89-aa7d-de1ca0b237d6-kube-api-access-9jh8g\") pod \"machine-api-operator-5694c8668f-zpjcj\" (UID: \"83d090b3-311a-4b89-aa7d-de1ca0b237d6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zpjcj" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.852705 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhp6x\" (UniqueName: \"kubernetes.io/projected/1ddc8dd4-60ac-4d28-8cab-1139c300a29c-kube-api-access-jhp6x\") pod \"cluster-samples-operator-665b6dd947-pfnkq\" (UID: \"1ddc8dd4-60ac-4d28-8cab-1139c300a29c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pfnkq" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.855177 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.875027 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.894900 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.914965 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.925307 4667 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:26 crc 
kubenswrapper[4667]: E0131 03:50:26.925404 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b119a43-b446-4226-9490-a7ba5baf2815-trusted-ca podName:9b119a43-b446-4226-9490-a7ba5baf2815 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:27.425375062 +0000 UTC m=+150.941710361 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9b119a43-b446-4226-9490-a7ba5baf2815-trusted-ca") pod "cluster-image-registry-operator-dc59b4c8b-rzjpv" (UID: "9b119a43-b446-4226-9490-a7ba5baf2815") : failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.926542 4667 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.926662 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-config podName:3a49a8a9-82b6-4374-a43a-224f2f9e14a4 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:27.426636765 +0000 UTC m=+150.942972094 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-config") pod "authentication-operator-69f744f599-xhcb5" (UID: "3a49a8a9-82b6-4374-a43a-224f2f9e14a4") : failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.934336 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.936588 4667 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.936662 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-config podName:cbf59768-adfb-48f6-b68b-ebf1675f1807 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:27.436639435 +0000 UTC m=+150.952974754 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-config") pod "apiserver-76f77b778f-7txvq" (UID: "cbf59768-adfb-48f6-b68b-ebf1675f1807") : failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.938772 4667 secret.go:188] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.938835 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf59768-adfb-48f6-b68b-ebf1675f1807-etcd-client podName:cbf59768-adfb-48f6-b68b-ebf1675f1807 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:27.438819341 +0000 UTC m=+150.955154740 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/cbf59768-adfb-48f6-b68b-ebf1675f1807-etcd-client") pod "apiserver-76f77b778f-7txvq" (UID: "cbf59768-adfb-48f6-b68b-ebf1675f1807") : failed to sync secret cache: timed out waiting for the condition Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.939051 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.941232 4667 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.941278 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-trusted-ca-bundle podName:3a49a8a9-82b6-4374-a43a-224f2f9e14a4 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:27.441265695 +0000 UTC m=+150.957601004 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-trusted-ca-bundle") pod "authentication-operator-69f744f599-xhcb5" (UID: "3a49a8a9-82b6-4374-a43a-224f2f9e14a4") : failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.941345 4667 secret.go:188] Couldn't get secret openshift-image-registry/image-registry-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.941396 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b119a43-b446-4226-9490-a7ba5baf2815-image-registry-operator-tls podName:9b119a43-b446-4226-9490-a7ba5baf2815 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:27.441381008 +0000 UTC m=+150.957716307 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/9b119a43-b446-4226-9490-a7ba5baf2815-image-registry-operator-tls") pod "cluster-image-registry-operator-dc59b4c8b-rzjpv" (UID: "9b119a43-b446-4226-9490-a7ba5baf2815") : failed to sync secret cache: timed out waiting for the condition Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.941423 4667 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.941573 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-service-ca-bundle podName:3a49a8a9-82b6-4374-a43a-224f2f9e14a4 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:27.441531322 +0000 UTC m=+150.957866851 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-service-ca-bundle") pod "authentication-operator-69f744f599-xhcb5" (UID: "3a49a8a9-82b6-4374-a43a-224f2f9e14a4") : failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.945446 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pfnkq" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.955404 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.975441 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.983504 4667 secret.go:188] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.983561 4667 secret.go:188] Couldn't get secret openshift-authentication-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.983827 4667 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.989184 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-serving-cert podName:3a49a8a9-82b6-4374-a43a-224f2f9e14a4 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:27.483683108 +0000 UTC m=+151.000018437 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-serving-cert") pod "authentication-operator-69f744f599-xhcb5" (UID: "3a49a8a9-82b6-4374-a43a-224f2f9e14a4") : failed to sync secret cache: timed out waiting for the condition Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.989371 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf59768-adfb-48f6-b68b-ebf1675f1807-serving-cert podName:cbf59768-adfb-48f6-b68b-ebf1675f1807 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:27.489342206 +0000 UTC m=+151.005677535 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cbf59768-adfb-48f6-b68b-ebf1675f1807-serving-cert") pod "apiserver-76f77b778f-7txvq" (UID: "cbf59768-adfb-48f6-b68b-ebf1675f1807") : failed to sync secret cache: timed out waiting for the condition Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.989449 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/402d584b-6176-4cee-8e27-cc233b48feec-config podName:402d584b-6176-4cee-8e27-cc233b48feec nodeName:}" failed. No retries permitted until 2026-01-31 03:50:27.489389447 +0000 UTC m=+151.005724786 (durationBeforeRetry 500ms). 
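The E-level pairs above (configmap.go:193 / secret.go:188 followed by nestedpendingoperations.go:348) show mounts failing because the kubelet's ConfigMap and Secret informer caches had not finished syncing, and each "No retries permitted until ... (durationBeforeRetry 500ms)" line schedules a per-volume retry with backoff. A minimal sketch of that doubling backoff, where the initial 500ms matches the log but the cap and structure are assumptions for illustration, not kubelet's exact code:

    // Sketch of per-operation exponential backoff: the first failure waits
    // 500ms (as in the log), and each subsequent failure doubles the wait
    // up to an assumed cap.
    package main

    import (
        "fmt"
        "time"
    )

    const (
        initialBackoff = 500 * time.Millisecond
        maxBackoff     = 2 * time.Minute // assumed cap for the sketch
    )

    func nextBackoff(current time.Duration) time.Duration {
        if current == 0 {
            return initialBackoff
        }
        if doubled := current * 2; doubled < maxBackoff {
            return doubled
        }
        return maxBackoff
    }

    func main() {
        var wait time.Duration
        for attempt := 1; attempt <= 5; attempt++ {
            wait = nextBackoff(wait)
            fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, wait)
        }
    }

The backoff is tracked per volume, which is why each failing volume above gets its own independent "No retries permitted until" deadline.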
Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.992346 4667 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.992466 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-trusted-ca-bundle podName:cbf59768-adfb-48f6-b68b-ebf1675f1807 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:27.492438156 +0000 UTC m=+151.008773655 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-trusted-ca-bundle") pod "apiserver-76f77b778f-7txvq" (UID: "cbf59768-adfb-48f6-b68b-ebf1675f1807") : failed to sync configmap cache: timed out waiting for the condition
Jan 31 03:50:26 crc kubenswrapper[4667]: I0131 03:50:26.993803 4667 request.go:700] Waited for 1.010136725s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token
Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.994822 4667 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition
Jan 31 03:50:26 crc kubenswrapper[4667]: E0131 03:50:26.994956 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-etcd-serving-ca podName:cbf59768-adfb-48f6-b68b-ebf1675f1807 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:27.494928421 +0000 UTC m=+151.011263730 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-etcd-serving-ca") pod "apiserver-76f77b778f-7txvq" (UID: "cbf59768-adfb-48f6-b68b-ebf1675f1807") : failed to sync configmap cache: timed out waiting for the condition
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.001293 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zpjcj"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.017291 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jkdkh"
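The request.go:700 entry above ("Waited for 1.010136725s due to client-side throttling, not priority and fairness") comes from client-go's client-side rate limiter, not from server-side API Priority and Fairness: once the configured request budget is exhausted, further requests are delayed locally. That budget is set by the QPS and Burst fields on a client-go rest.Config; a sketch of where those knobs live, with arbitrary example values and a placeholder kubeconfig path:

    // Sketch: configuring the client-side rate limit on a client-go rest.Config.
    // The values and the kubeconfig path are arbitrary examples, not
    // recommendations for this cluster.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // steady-state requests per second before throttling kicks in
        cfg.Burst = 100 // short bursts allowed above QPS
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", clientset)
    }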
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.018152 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrqkz\" (UniqueName: \"kubernetes.io/projected/baefc4bd-d927-4cf9-94af-eab8b042b3ca-kube-api-access-rrqkz\") pod \"controller-manager-879f6c89f-gf8vs\" (UID: \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.035174 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b119a43-b446-4226-9490-a7ba5baf2815-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rzjpv\" (UID: \"9b119a43-b446-4226-9490-a7ba5baf2815\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.098056 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm8fw\" (UniqueName: \"kubernetes.io/projected/4dd56584-ddc5-48e9-be73-9758dca8dddf-kube-api-access-wm8fw\") pod \"apiserver-7bbb656c7d-4rfkg\" (UID: \"4dd56584-ddc5-48e9-be73-9758dca8dddf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.099194 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9pz8\" (UniqueName: \"kubernetes.io/projected/9a88ec07-7527-4e9e-ad37-a2ad408658a6-kube-api-access-p9pz8\") pod \"openshift-controller-manager-operator-756b6f6bc6-q74f5\" (UID: \"9a88ec07-7527-4e9e-ad37-a2ad408658a6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q74f5"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.100158 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jkdkh" event={"ID":"a540fba9-faa8-4cfb-b907-4e7099429e30","Type":"ContainerStarted","Data":"92d4e56597523e2c40f4e8a8799b50688cc04827e133a6540d5fe45f4a1bd94a"}
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.113587 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdzlz\" (UniqueName: \"kubernetes.io/projected/745b1e30-1f16-4539-847b-88db36eb6d4b-kube-api-access-gdzlz\") pod \"downloads-7954f5f757-5zj2q\" (UID: \"745b1e30-1f16-4539-847b-88db36eb6d4b\") " pod="openshift-console/downloads-7954f5f757-5zj2q"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.116766 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.135553 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.156332 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.175787 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.195304 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.212060 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.215052 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.223869 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.237929 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.240441 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2"]
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.252760 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q74f5"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.255473 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.259231 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5zj2q"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.276239 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.280415 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pfnkq"]
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.295263 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.316362 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.335164 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.361893 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.382086 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.396177 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.416157 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.435888 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.456683 4667 reflector.go:368]
Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.474497 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.481831 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b119a43-b446-4226-9490-a7ba5baf2815-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rzjpv\" (UID: \"9b119a43-b446-4226-9490-a7ba5baf2815\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.481901 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-config\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.481921 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b119a43-b446-4226-9490-a7ba5baf2815-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rzjpv\" (UID: \"9b119a43-b446-4226-9490-a7ba5baf2815\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.481954 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-config\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.481990 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cbf59768-adfb-48f6-b68b-ebf1675f1807-etcd-client\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.482044 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-service-ca-bundle\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.482064 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.505227 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.519107 4667 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.529977 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gf8vs"] Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.535344 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.555544 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.563770 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zpjcj"] Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.566110 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg"] Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.576720 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 03:50:27 crc kubenswrapper[4667]: W0131 03:50:27.579970 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83d090b3_311a_4b89_aa7d_de1ca0b237d6.slice/crio-0b726707873de72ecadeb63114134ccdf08e104a188f707b55302389a2a49dfb WatchSource:0}: Error finding container 0b726707873de72ecadeb63114134ccdf08e104a188f707b55302389a2a49dfb: Status 404 returned error can't find the container with id 0b726707873de72ecadeb63114134ccdf08e104a188f707b55302389a2a49dfb Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.582764 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/402d584b-6176-4cee-8e27-cc233b48feec-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mt595\" (UID: \"402d584b-6176-4cee-8e27-cc233b48feec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mt595" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.582790 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf59768-adfb-48f6-b68b-ebf1675f1807-serving-cert\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.582809 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-etcd-serving-ca\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.582827 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.582854 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-serving-cert\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.597125 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.600003 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5zj2q"] Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.616358 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.636498 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.657251 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.660784 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q74f5"] Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.675128 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.699114 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.714561 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.735710 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.756927 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 03:50:27 crc kubenswrapper[4667]: E0131 03:50:27.768875 4667 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:27 crc kubenswrapper[4667]: E0131 03:50:27.768921 4667 projected.go:194] Error preparing data for projected volume kube-api-access-xc88x for pod openshift-apiserver/apiserver-76f77b778f-7txvq: failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:27 crc kubenswrapper[4667]: E0131 03:50:27.769021 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cbf59768-adfb-48f6-b68b-ebf1675f1807-kube-api-access-xc88x podName:cbf59768-adfb-48f6-b68b-ebf1675f1807 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:28.268997444 +0000 UTC m=+151.785332743 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xc88x" (UniqueName: "kubernetes.io/projected/cbf59768-adfb-48f6-b68b-ebf1675f1807-kube-api-access-xc88x") pod "apiserver-76f77b778f-7txvq" (UID: "cbf59768-adfb-48f6-b68b-ebf1675f1807") : failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.774791 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 03:50:27 crc kubenswrapper[4667]: E0131 03:50:27.788799 4667 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:27 crc kubenswrapper[4667]: E0131 03:50:27.788840 4667 projected.go:194] Error preparing data for projected volume kube-api-access-rbvlq for pod openshift-authentication-operator/authentication-operator-69f744f599-xhcb5: failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:27 crc kubenswrapper[4667]: E0131 03:50:27.788912 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-kube-api-access-rbvlq podName:3a49a8a9-82b6-4374-a43a-224f2f9e14a4 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:28.288895031 +0000 UTC m=+151.805230330 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rbvlq" (UniqueName: "kubernetes.io/projected/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-kube-api-access-rbvlq") pod "authentication-operator-69f744f599-xhcb5" (UID: "3a49a8a9-82b6-4374-a43a-224f2f9e14a4") : failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.816847 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d92f\" (UniqueName: \"kubernetes.io/projected/c2cef73f-5410-499e-ae70-491c866c1b48-kube-api-access-6d92f\") pod \"ingress-operator-5b745b69d9-bclzx\" (UID: \"c2cef73f-5410-499e-ae70-491c866c1b48\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx" Jan 31 03:50:27 crc kubenswrapper[4667]: E0131 03:50:27.827633 4667 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:27 crc kubenswrapper[4667]: E0131 03:50:27.827709 4667 projected.go:194] Error preparing data for projected volume kube-api-access-5phmx for pod openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf: failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:27 crc kubenswrapper[4667]: E0131 03:50:27.827813 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9af91113-a315-4416-a1f2-6566c16278cf-kube-api-access-5phmx podName:9af91113-a315-4416-a1f2-6566c16278cf nodeName:}" failed. No retries permitted until 2026-01-31 03:50:28.327789813 +0000 UTC m=+151.844125112 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5phmx" (UniqueName: "kubernetes.io/projected/9af91113-a315-4416-a1f2-6566c16278cf-kube-api-access-5phmx") pod "openshift-config-operator-7777fb866f-ms8lf" (UID: "9af91113-a315-4416-a1f2-6566c16278cf") : failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.842854 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9h9d\" (UniqueName: \"kubernetes.io/projected/0400a903-d02e-41b4-99f3-3c7b57744839-kube-api-access-d9h9d\") pod \"dns-operator-744455d44c-sbbxx\" (UID: \"0400a903-d02e-41b4-99f3-3c7b57744839\") " pod="openshift-dns-operator/dns-operator-744455d44c-sbbxx" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.853662 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlphk\" (UniqueName: \"kubernetes.io/projected/21469e62-0345-41f0-a07b-eac67df38faf-kube-api-access-tlphk\") pod \"oauth-openshift-558db77b4-dmtcm\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.870452 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2cef73f-5410-499e-ae70-491c866c1b48-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bclzx\" (UID: \"c2cef73f-5410-499e-ae70-491c866c1b48\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.874130 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.894660 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.918739 4667 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.930174 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.936906 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.955391 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.975338 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 03:50:27 crc kubenswrapper[4667]: I0131 03:50:27.994613 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.012742 4667 request.go:700] Waited for 1.919037555s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.013693 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.016221 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.078002 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.079465 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.080077 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 03:50:28 crc kubenswrapper[4667]: E0131 03:50:28.086038 4667 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.086738 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-sbbxx" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.112734 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jkdkh" event={"ID":"a540fba9-faa8-4cfb-b907-4e7099429e30","Type":"ContainerStarted","Data":"d2bfaf69035fa33f53fe64bec3361e19a952b6baeedce0aa2f850db566ad12ff"} Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.112777 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jkdkh" event={"ID":"a540fba9-faa8-4cfb-b907-4e7099429e30","Type":"ContainerStarted","Data":"a221ec0d1b44ed125ab87c7f2e6ac49acd5142424cb064180da9cd2b6d6ef6e9"} Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.119560 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.130847 4667 generic.go:334] "Generic (PLEG): container finished" podID="4dd56584-ddc5-48e9-be73-9758dca8dddf" containerID="c16ff7c29578b5004cade41df6abf7550005a46e2e1934e3f572509f3670229a" exitCode=0 Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.131553 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" event={"ID":"4dd56584-ddc5-48e9-be73-9758dca8dddf","Type":"ContainerDied","Data":"c16ff7c29578b5004cade41df6abf7550005a46e2e1934e3f572509f3670229a"} Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.131584 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" event={"ID":"4dd56584-ddc5-48e9-be73-9758dca8dddf","Type":"ContainerStarted","Data":"c11abf214b8fb46d31e6be480df0b4d285bbd48edec7dd4a33fddc636ef5b134"} Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.140225 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.142409 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" event={"ID":"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb","Type":"ContainerStarted","Data":"4936970391d4c9923966a78873ffb18137cf2d816fb34c1589649063d522e4e5"} Jan 31 
03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.142441 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" event={"ID":"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb","Type":"ContainerStarted","Data":"53a2efa49784ac503e56b29ef85a4caa7a4ec842dba8f8accc2c3ddfa4ea08c9"} Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.142457 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.145182 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" event={"ID":"baefc4bd-d927-4cf9-94af-eab8b042b3ca","Type":"ContainerStarted","Data":"e64ef9d54d89ea09eae175e003404c5cdde2641ca34119c86752b41b57857afb"} Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.145215 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" event={"ID":"baefc4bd-d927-4cf9-94af-eab8b042b3ca","Type":"ContainerStarted","Data":"615432ec16782ca51a3e99ce4cc1389e6f5d09823ff24cee0fb38b69703b830e"} Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.145228 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.149139 4667 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-m5nl2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.149189 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" podUID="74c3828b-92ba-4a4a-bfeb-d5d02facdbdb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.150905 4667 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-gf8vs container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.150929 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" podUID="baefc4bd-d927-4cf9-94af-eab8b042b3ca" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.155103 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5zj2q" event={"ID":"745b1e30-1f16-4539-847b-88db36eb6d4b","Type":"ContainerStarted","Data":"ba7dfe882d4d31322dc090f471f700135fc8db7137b941d18b2da9d08d8b0c49"} Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.155145 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5zj2q" 
event={"ID":"745b1e30-1f16-4539-847b-88db36eb6d4b","Type":"ContainerStarted","Data":"237c6479e5448f424e92f7530c748efc401ef7b22793370cffe94b8489fc63c1"} Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.155750 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5zj2q" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.156001 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.159087 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pfnkq" event={"ID":"1ddc8dd4-60ac-4d28-8cab-1139c300a29c","Type":"ContainerStarted","Data":"7a8f56cd3cecb21e1549928af515f04f296aa1bb12a64c5b9619ba022f9bf32c"} Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.159121 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pfnkq" event={"ID":"1ddc8dd4-60ac-4d28-8cab-1139c300a29c","Type":"ContainerStarted","Data":"c04ea747f00ec01f36ee1f9b91de7015251f4ccf6b2f9313faae64aa98484d4d"} Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.159134 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pfnkq" event={"ID":"1ddc8dd4-60ac-4d28-8cab-1139c300a29c","Type":"ContainerStarted","Data":"61ea183d4560a3fd23c138fbe368fe28868487d777ece3b3281fff694176f0bf"} Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.163589 4667 patch_prober.go:28] interesting pod/downloads-7954f5f757-5zj2q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.163617 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5zj2q" podUID="745b1e30-1f16-4539-847b-88db36eb6d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.164541 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q74f5" event={"ID":"9a88ec07-7527-4e9e-ad37-a2ad408658a6","Type":"ContainerStarted","Data":"cabddfd92c78fb81fa6e2d58b2f8316e1ecd480f0d377f9b55b56cf98a2caf48"} Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.164561 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q74f5" event={"ID":"9a88ec07-7527-4e9e-ad37-a2ad408658a6","Type":"ContainerStarted","Data":"3d265924bc1855924eef82a832f10f96232c9a350d41cd0991c573f1b31cafcb"} Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.167126 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zpjcj" event={"ID":"83d090b3-311a-4b89-aa7d-de1ca0b237d6","Type":"ContainerStarted","Data":"1fab4e2dc965faf927feefeeafc39fb40b5d2acd515aeaca891aea96fb1444cc"} Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.167449 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zpjcj" 
event={"ID":"83d090b3-311a-4b89-aa7d-de1ca0b237d6","Type":"ContainerStarted","Data":"8f34eede4e4eadf7ca60cf5944da9161dd4c7e4829c1a25c7353a72bc8c9d634"} Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.167460 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zpjcj" event={"ID":"83d090b3-311a-4b89-aa7d-de1ca0b237d6","Type":"ContainerStarted","Data":"0b726707873de72ecadeb63114134ccdf08e104a188f707b55302389a2a49dfb"} Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.201539 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-trusted-ca\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.201571 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-bound-sa-token\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.201589 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e7b15a28-215c-46ab-b4c7-a46f5e8205ae-etcd-service-ca\") pod \"etcd-operator-b45778765-4qz94\" (UID: \"e7b15a28-215c-46ab-b4c7-a46f5e8205ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.201618 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.201635 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e7b15a28-215c-46ab-b4c7-a46f5e8205ae-etcd-ca\") pod \"etcd-operator-b45778765-4qz94\" (UID: \"e7b15a28-215c-46ab-b4c7-a46f5e8205ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.201660 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86d8d0d4-69ef-439d-b516-01b8d02cf5ce-trusted-ca\") pod \"console-operator-58897d9998-nnvtr\" (UID: \"86d8d0d4-69ef-439d-b516-01b8d02cf5ce\") " pod="openshift-console-operator/console-operator-58897d9998-nnvtr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.201709 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86d8d0d4-69ef-439d-b516-01b8d02cf5ce-serving-cert\") pod \"console-operator-58897d9998-nnvtr\" (UID: \"86d8d0d4-69ef-439d-b516-01b8d02cf5ce\") " pod="openshift-console-operator/console-operator-58897d9998-nnvtr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.201783 4667 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-console-config\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.201825 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7b15a28-215c-46ab-b4c7-a46f5e8205ae-serving-cert\") pod \"etcd-operator-b45778765-4qz94\" (UID: \"e7b15a28-215c-46ab-b4c7-a46f5e8205ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.201844 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-trusted-ca-bundle\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.201861 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-console-oauth-config\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.201925 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.203429 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-oauth-serving-cert\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.203486 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-registry-certificates\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.203507 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsnvh\" (UniqueName: \"kubernetes.io/projected/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-kube-api-access-gsnvh\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.203523 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/86d8d0d4-69ef-439d-b516-01b8d02cf5ce-config\") pod \"console-operator-58897d9998-nnvtr\" (UID: \"86d8d0d4-69ef-439d-b516-01b8d02cf5ce\") " pod="openshift-console-operator/console-operator-58897d9998-nnvtr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.203555 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkq9f\" (UniqueName: \"kubernetes.io/projected/86d8d0d4-69ef-439d-b516-01b8d02cf5ce-kube-api-access-tkq9f\") pod \"console-operator-58897d9998-nnvtr\" (UID: \"86d8d0d4-69ef-439d-b516-01b8d02cf5ce\") " pod="openshift-console-operator/console-operator-58897d9998-nnvtr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.203570 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-console-serving-cert\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.203621 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dv2f\" (UniqueName: \"kubernetes.io/projected/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-kube-api-access-8dv2f\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.203655 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7b15a28-215c-46ab-b4c7-a46f5e8205ae-etcd-client\") pod \"etcd-operator-b45778765-4qz94\" (UID: \"e7b15a28-215c-46ab-b4c7-a46f5e8205ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.203697 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7b15a28-215c-46ab-b4c7-a46f5e8205ae-config\") pod \"etcd-operator-b45778765-4qz94\" (UID: \"e7b15a28-215c-46ab-b4c7-a46f5e8205ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.203726 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f8g8\" (UniqueName: \"kubernetes.io/projected/e7b15a28-215c-46ab-b4c7-a46f5e8205ae-kube-api-access-9f8g8\") pod \"etcd-operator-b45778765-4qz94\" (UID: \"e7b15a28-215c-46ab-b4c7-a46f5e8205ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.203750 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-registry-tls\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.203791 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: 
\"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.203807 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-service-ca\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: E0131 03:50:28.211259 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:28.711235997 +0000 UTC m=+152.227571296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.216341 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.224449 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.228033 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-config\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.245906 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.246224 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dmtcm"] Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.273955 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.275082 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 03:50:28 crc kubenswrapper[4667]: E0131 03:50:28.276845 4667 projected.go:194] Error preparing data for projected volume kube-api-access-4ksbp for pod openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mt595: failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:28 crc kubenswrapper[4667]: E0131 03:50:28.277418 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/402d584b-6176-4cee-8e27-cc233b48feec-kube-api-access-4ksbp podName:402d584b-6176-4cee-8e27-cc233b48feec nodeName:}" failed. No retries permitted until 2026-01-31 03:50:28.777393748 +0000 UTC m=+152.293729057 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4ksbp" (UniqueName: "kubernetes.io/projected/402d584b-6176-4cee-8e27-cc233b48feec-kube-api-access-4ksbp") pod "openshift-apiserver-operator-796bbdcf4f-mt595" (UID: "402d584b-6176-4cee-8e27-cc233b48feec") : failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.289400 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.297388 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.304648 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.304801 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e7b15a28-215c-46ab-b4c7-a46f5e8205ae-etcd-service-ca\") pod \"etcd-operator-b45778765-4qz94\" (UID: \"e7b15a28-215c-46ab-b4c7-a46f5e8205ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" Jan 31 03:50:28 crc kubenswrapper[4667]: E0131 03:50:28.304917 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:28.804881013 +0000 UTC m=+152.321216312 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305027 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-trusted-ca\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305066 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-bound-sa-token\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305092 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305151 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nnqh\" (UniqueName: \"kubernetes.io/projected/145e5e24-2f94-48b2-be05-b08dbbb09312-kube-api-access-9nnqh\") pod \"collect-profiles-29497185-9b5gp\" (UID: \"145e5e24-2f94-48b2-be05-b08dbbb09312\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305196 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86d8d0d4-69ef-439d-b516-01b8d02cf5ce-serving-cert\") pod \"console-operator-58897d9998-nnvtr\" (UID: \"86d8d0d4-69ef-439d-b516-01b8d02cf5ce\") " pod="openshift-console-operator/console-operator-58897d9998-nnvtr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305221 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc88x\" (UniqueName: \"kubernetes.io/projected/cbf59768-adfb-48f6-b68b-ebf1675f1807-kube-api-access-xc88x\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305259 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/56542c6f-259e-42a8-b62a-ea0ac38af319-webhook-cert\") pod \"packageserver-d55dfcdfc-67p2b\" (UID: \"56542c6f-259e-42a8-b62a-ea0ac38af319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305284 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/edfe2658-7a40-43db-a17b-72d1ea1fde3d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lcnlf\" (UID: \"edfe2658-7a40-43db-a17b-72d1ea1fde3d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lcnlf" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305307 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbvlq\" (UniqueName: \"kubernetes.io/projected/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-kube-api-access-rbvlq\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305334 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlkfw\" (UniqueName: \"kubernetes.io/projected/18e6f9b3-4be1-4a96-9a05-f42b40f4c2fe-kube-api-access-rlkfw\") pod \"service-ca-9c57cc56f-dndtw\" (UID: \"18e6f9b3-4be1-4a96-9a05-f42b40f4c2fe\") " pod="openshift-service-ca/service-ca-9c57cc56f-dndtw" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305359 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbbace8c-06bb-4b50-a132-a681482dc9e5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-stnvq\" (UID: \"dbbace8c-06bb-4b50-a132-a681482dc9e5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-stnvq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305381 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bc826929-768c-408f-a0f9-74bd29154340-profile-collector-cert\") pod \"catalog-operator-68c6474976-nrjrr\" (UID: \"bc826929-768c-408f-a0f9-74bd29154340\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305406 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd4qt\" (UniqueName: \"kubernetes.io/projected/9ce6e7c4-30b6-4812-8b50-bf81a13f7b9d-kube-api-access-hd4qt\") pod \"package-server-manager-789f6589d5-vsb97\" (UID: \"9ce6e7c4-30b6-4812-8b50-bf81a13f7b9d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vsb97" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305437 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d5757a8b-5d94-46d5-bc18-4a6c757a9ff2-proxy-tls\") pod \"machine-config-operator-74547568cd-rs9cq\" (UID: \"d5757a8b-5d94-46d5-bc18-4a6c757a9ff2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305456 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v87mm\" (UniqueName: \"kubernetes.io/projected/364c92fa-bb5a-428d-a999-eb6415d3f307-kube-api-access-v87mm\") pod \"service-ca-operator-777779d784-sjv9d\" (UID: \"364c92fa-bb5a-428d-a999-eb6415d3f307\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sjv9d" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 
03:50:28.305481 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed8b36d0-771d-48bb-9393-db864ff8ff84-service-ca-bundle\") pod \"router-default-5444994796-kvgs8\" (UID: \"ed8b36d0-771d-48bb-9393-db864ff8ff84\") " pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305506 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-console-oauth-config\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305530 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/889f5458-9c03-4be0-8f99-848f68c3ecc8-config\") pod \"kube-controller-manager-operator-78b949d7b-9pdkp\" (UID: \"889f5458-9c03-4be0-8f99-848f68c3ecc8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9pdkp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305563 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d426c096-b6d9-4696-8066-2b9ec75356af-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7pbrg\" (UID: \"d426c096-b6d9-4696-8066-2b9ec75356af\") " pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305580 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs2h4\" (UniqueName: \"kubernetes.io/projected/bc826929-768c-408f-a0f9-74bd29154340-kube-api-access-gs2h4\") pod \"catalog-operator-68c6474976-nrjrr\" (UID: \"bc826929-768c-408f-a0f9-74bd29154340\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305599 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3dc6c6a9-3322-42db-a408-9a03c18a7531-mountpoint-dir\") pod \"csi-hostpathplugin-82j72\" (UID: \"3dc6c6a9-3322-42db-a408-9a03c18a7531\") " pod="hostpath-provisioner/csi-hostpathplugin-82j72" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305615 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cmkr\" (UniqueName: \"kubernetes.io/projected/3dc6c6a9-3322-42db-a408-9a03c18a7531-kube-api-access-9cmkr\") pod \"csi-hostpathplugin-82j72\" (UID: \"3dc6c6a9-3322-42db-a408-9a03c18a7531\") " pod="hostpath-provisioner/csi-hostpathplugin-82j72" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305638 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9356361a-5000-468f-bb1d-17460cd2e9dc-proxy-tls\") pod \"machine-config-controller-84d6567774-j2rkz\" (UID: \"9356361a-5000-468f-bb1d-17460cd2e9dc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2rkz" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305668 4667 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-527dl\" (UniqueName: \"kubernetes.io/projected/d426c096-b6d9-4696-8066-2b9ec75356af-kube-api-access-527dl\") pod \"marketplace-operator-79b997595-7pbrg\" (UID: \"d426c096-b6d9-4696-8066-2b9ec75356af\") " pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305688 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26736509-60d6-4b4e-94b2-1fef29fa0c91-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nvj6q\" (UID: \"26736509-60d6-4b4e-94b2-1fef29fa0c91\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nvj6q" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.305661 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.306073 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npsrq\" (UniqueName: \"kubernetes.io/projected/dbbace8c-06bb-4b50-a132-a681482dc9e5-kube-api-access-npsrq\") pod \"control-plane-machine-set-operator-78cbb6b69f-stnvq\" (UID: \"dbbace8c-06bb-4b50-a132-a681482dc9e5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-stnvq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.306228 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e7b15a28-215c-46ab-b4c7-a46f5e8205ae-etcd-service-ca\") pod \"etcd-operator-b45778765-4qz94\" (UID: \"e7b15a28-215c-46ab-b4c7-a46f5e8205ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.306344 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsnvh\" (UniqueName: \"kubernetes.io/projected/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-kube-api-access-gsnvh\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.306433 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ed8b36d0-771d-48bb-9393-db864ff8ff84-default-certificate\") pod \"router-default-5444994796-kvgs8\" (UID: \"ed8b36d0-771d-48bb-9393-db864ff8ff84\") " pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.306458 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkq9f\" (UniqueName: \"kubernetes.io/projected/86d8d0d4-69ef-439d-b516-01b8d02cf5ce-kube-api-access-tkq9f\") pod \"console-operator-58897d9998-nnvtr\" (UID: \"86d8d0d4-69ef-439d-b516-01b8d02cf5ce\") " pod="openshift-console-operator/console-operator-58897d9998-nnvtr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.306479 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-console-serving-cert\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.306512 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/145e5e24-2f94-48b2-be05-b08dbbb09312-config-volume\") pod \"collect-profiles-29497185-9b5gp\" (UID: \"145e5e24-2f94-48b2-be05-b08dbbb09312\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.306531 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bc826929-768c-408f-a0f9-74bd29154340-srv-cert\") pod \"catalog-operator-68c6474976-nrjrr\" (UID: \"bc826929-768c-408f-a0f9-74bd29154340\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.307656 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3dc6c6a9-3322-42db-a408-9a03c18a7531-plugins-dir\") pod \"csi-hostpathplugin-82j72\" (UID: \"3dc6c6a9-3322-42db-a408-9a03c18a7531\") " pod="hostpath-provisioner/csi-hostpathplugin-82j72" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.307769 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7b15a28-215c-46ab-b4c7-a46f5e8205ae-config\") pod \"etcd-operator-b45778765-4qz94\" (UID: \"e7b15a28-215c-46ab-b4c7-a46f5e8205ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.307795 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcjss\" (UniqueName: \"kubernetes.io/projected/d5757a8b-5d94-46d5-bc18-4a6c757a9ff2-kube-api-access-hcjss\") pod \"machine-config-operator-74547568cd-rs9cq\" (UID: \"d5757a8b-5d94-46d5-bc18-4a6c757a9ff2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.307860 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d44cf093-fd97-4adf-bdad-2c3fdb4157d7-metrics-tls\") pod \"dns-default-kfr9j\" (UID: \"d44cf093-fd97-4adf-bdad-2c3fdb4157d7\") " pod="openshift-dns/dns-default-kfr9j" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.307908 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f8g8\" (UniqueName: \"kubernetes.io/projected/e7b15a28-215c-46ab-b4c7-a46f5e8205ae-kube-api-access-9f8g8\") pod \"etcd-operator-b45778765-4qz94\" (UID: \"e7b15a28-215c-46ab-b4c7-a46f5e8205ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.307936 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c33fc16-1215-438a-93e6-840ca5444e75-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s9tvt\" (UID: \"0c33fc16-1215-438a-93e6-840ca5444e75\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s9tvt" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.308070 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d426c096-b6d9-4696-8066-2b9ec75356af-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7pbrg\" (UID: \"d426c096-b6d9-4696-8066-2b9ec75356af\") " pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.308118 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-service-ca\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.308137 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5608a012-09d1-4c57-9371-715625086e4d-cert\") pod \"ingress-canary-xf9cn\" (UID: \"5608a012-09d1-4c57-9371-715625086e4d\") " pod="openshift-ingress-canary/ingress-canary-xf9cn" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.308824 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.308904 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ed8b36d0-771d-48bb-9393-db864ff8ff84-stats-auth\") pod \"router-default-5444994796-kvgs8\" (UID: \"ed8b36d0-771d-48bb-9393-db864ff8ff84\") " pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.308939 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7b15a28-215c-46ab-b4c7-a46f5e8205ae-config\") pod \"etcd-operator-b45778765-4qz94\" (UID: \"e7b15a28-215c-46ab-b4c7-a46f5e8205ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.309657 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-service-ca\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.309792 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26736509-60d6-4b4e-94b2-1fef29fa0c91-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nvj6q\" (UID: \"26736509-60d6-4b4e-94b2-1fef29fa0c91\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nvj6q" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.309828 4667 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d44cf093-fd97-4adf-bdad-2c3fdb4157d7-config-volume\") pod \"dns-default-kfr9j\" (UID: \"d44cf093-fd97-4adf-bdad-2c3fdb4157d7\") " pod="openshift-dns/dns-default-kfr9j" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.309870 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3dc6c6a9-3322-42db-a408-9a03c18a7531-socket-dir\") pod \"csi-hostpathplugin-82j72\" (UID: \"3dc6c6a9-3322-42db-a408-9a03c18a7531\") " pod="hostpath-provisioner/csi-hostpathplugin-82j72" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.309900 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c33fc16-1215-438a-93e6-840ca5444e75-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s9tvt\" (UID: \"0c33fc16-1215-438a-93e6-840ca5444e75\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s9tvt" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.309921 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/889f5458-9c03-4be0-8f99-848f68c3ecc8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9pdkp\" (UID: \"889f5458-9c03-4be0-8f99-848f68c3ecc8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9pdkp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.310018 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e7b15a28-215c-46ab-b4c7-a46f5e8205ae-etcd-ca\") pod \"etcd-operator-b45778765-4qz94\" (UID: \"e7b15a28-215c-46ab-b4c7-a46f5e8205ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.311121 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e7b15a28-215c-46ab-b4c7-a46f5e8205ae-etcd-ca\") pod \"etcd-operator-b45778765-4qz94\" (UID: \"e7b15a28-215c-46ab-b4c7-a46f5e8205ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.311176 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86d8d0d4-69ef-439d-b516-01b8d02cf5ce-trusted-ca\") pod \"console-operator-58897d9998-nnvtr\" (UID: \"86d8d0d4-69ef-439d-b516-01b8d02cf5ce\") " pod="openshift-console-operator/console-operator-58897d9998-nnvtr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.311243 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/18e6f9b3-4be1-4a96-9a05-f42b40f4c2fe-signing-cabundle\") pod \"service-ca-9c57cc56f-dndtw\" (UID: \"18e6f9b3-4be1-4a96-9a05-f42b40f4c2fe\") " pod="openshift-service-ca/service-ca-9c57cc56f-dndtw" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.312708 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86d8d0d4-69ef-439d-b516-01b8d02cf5ce-trusted-ca\") pod \"console-operator-58897d9998-nnvtr\" (UID: 
\"86d8d0d4-69ef-439d-b516-01b8d02cf5ce\") " pod="openshift-console-operator/console-operator-58897d9998-nnvtr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.312747 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3dc6c6a9-3322-42db-a408-9a03c18a7531-csi-data-dir\") pod \"csi-hostpathplugin-82j72\" (UID: \"3dc6c6a9-3322-42db-a408-9a03c18a7531\") " pod="hostpath-provisioner/csi-hostpathplugin-82j72" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.312783 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed8b36d0-771d-48bb-9393-db864ff8ff84-metrics-certs\") pod \"router-default-5444994796-kvgs8\" (UID: \"ed8b36d0-771d-48bb-9393-db864ff8ff84\") " pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.313061 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-console-config\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.313094 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/031121b1-b221-434c-91c0-d9b433cd6e7c-srv-cert\") pod \"olm-operator-6b444d44fb-97sgp\" (UID: \"031121b1-b221-434c-91c0-d9b433cd6e7c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp" Jan 31 03:50:28 crc kubenswrapper[4667]: E0131 03:50:28.313290 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:28.813264281 +0000 UTC m=+152.329599580 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.316235 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfxwj\" (UniqueName: \"kubernetes.io/projected/031121b1-b221-434c-91c0-d9b433cd6e7c-kube-api-access-dfxwj\") pod \"olm-operator-6b444d44fb-97sgp\" (UID: \"031121b1-b221-434c-91c0-d9b433cd6e7c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.316323 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxdfh\" (UniqueName: \"kubernetes.io/projected/ed8b36d0-771d-48bb-9393-db864ff8ff84-kube-api-access-dxdfh\") pod \"router-default-5444994796-kvgs8\" (UID: \"ed8b36d0-771d-48bb-9393-db864ff8ff84\") " pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.316346 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d5757a8b-5d94-46d5-bc18-4a6c757a9ff2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rs9cq\" (UID: \"d5757a8b-5d94-46d5-bc18-4a6c757a9ff2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.316407 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7b15a28-215c-46ab-b4c7-a46f5e8205ae-serving-cert\") pod \"etcd-operator-b45778765-4qz94\" (UID: \"e7b15a28-215c-46ab-b4c7-a46f5e8205ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.316492 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-trusted-ca-bundle\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.316552 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ce6e7c4-30b6-4812-8b50-bf81a13f7b9d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vsb97\" (UID: \"9ce6e7c4-30b6-4812-8b50-bf81a13f7b9d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vsb97" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.316579 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/83ec5b09-78b8-44c8-8530-2375417e0c97-node-bootstrap-token\") pod \"machine-config-server-pgpmm\" (UID: \"83ec5b09-78b8-44c8-8530-2375417e0c97\") " pod="openshift-machine-config-operator/machine-config-server-pgpmm" Jan 31 03:50:28 crc 
kubenswrapper[4667]: I0131 03:50:28.317515 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.318365 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/56542c6f-259e-42a8-b62a-ea0ac38af319-apiservice-cert\") pod \"packageserver-d55dfcdfc-67p2b\" (UID: \"56542c6f-259e-42a8-b62a-ea0ac38af319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.318394 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/364c92fa-bb5a-428d-a999-eb6415d3f307-config\") pod \"service-ca-operator-777779d784-sjv9d\" (UID: \"364c92fa-bb5a-428d-a999-eb6415d3f307\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sjv9d" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.318435 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.318520 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-oauth-serving-cert\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.318547 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89f42\" (UniqueName: \"kubernetes.io/projected/26736509-60d6-4b4e-94b2-1fef29fa0c91-kube-api-access-89f42\") pod \"kube-storage-version-migrator-operator-b67b599dd-nvj6q\" (UID: \"26736509-60d6-4b4e-94b2-1fef29fa0c91\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nvj6q" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.319238 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j2sj\" (UniqueName: \"kubernetes.io/projected/d44cf093-fd97-4adf-bdad-2c3fdb4157d7-kube-api-access-5j2sj\") pod \"dns-default-kfr9j\" (UID: \"d44cf093-fd97-4adf-bdad-2c3fdb4157d7\") " pod="openshift-dns/dns-default-kfr9j" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.319314 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6sz5\" (UniqueName: \"kubernetes.io/projected/f18d0f3d-32c1-40d3-99da-969208958cf4-kube-api-access-r6sz5\") pod \"multus-admission-controller-857f4d67dd-xzsnp\" (UID: \"f18d0f3d-32c1-40d3-99da-969208958cf4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xzsnp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.319351 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/889f5458-9c03-4be0-8f99-848f68c3ecc8-serving-cert\") pod 
\"kube-controller-manager-operator-78b949d7b-9pdkp\" (UID: \"889f5458-9c03-4be0-8f99-848f68c3ecc8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9pdkp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.319389 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87nnm\" (UniqueName: \"kubernetes.io/projected/5236f7ce-22c8-4283-9046-72fd92d2b7a7-kube-api-access-87nnm\") pod \"migrator-59844c95c7-czqrm\" (UID: \"5236f7ce-22c8-4283-9046-72fd92d2b7a7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-czqrm" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.319422 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-registry-certificates\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.319443 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edfe2658-7a40-43db-a17b-72d1ea1fde3d-config\") pod \"kube-apiserver-operator-766d6c64bb-lcnlf\" (UID: \"edfe2658-7a40-43db-a17b-72d1ea1fde3d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lcnlf" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.319547 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86d8d0d4-69ef-439d-b516-01b8d02cf5ce-config\") pod \"console-operator-58897d9998-nnvtr\" (UID: \"86d8d0d4-69ef-439d-b516-01b8d02cf5ce\") " pod="openshift-console-operator/console-operator-58897d9998-nnvtr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.319587 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/145e5e24-2f94-48b2-be05-b08dbbb09312-secret-volume\") pod \"collect-profiles-29497185-9b5gp\" (UID: \"145e5e24-2f94-48b2-be05-b08dbbb09312\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.319683 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6r69\" (UniqueName: \"kubernetes.io/projected/56542c6f-259e-42a8-b62a-ea0ac38af319-kube-api-access-p6r69\") pod \"packageserver-d55dfcdfc-67p2b\" (UID: \"56542c6f-259e-42a8-b62a-ea0ac38af319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.319703 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hdtc\" (UniqueName: \"kubernetes.io/projected/83ec5b09-78b8-44c8-8530-2375417e0c97-kube-api-access-9hdtc\") pod \"machine-config-server-pgpmm\" (UID: \"83ec5b09-78b8-44c8-8530-2375417e0c97\") " pod="openshift-machine-config-operator/machine-config-server-pgpmm" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.319740 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgkbw\" (UniqueName: \"kubernetes.io/projected/5608a012-09d1-4c57-9371-715625086e4d-kube-api-access-qgkbw\") pod 
\"ingress-canary-xf9cn\" (UID: \"5608a012-09d1-4c57-9371-715625086e4d\") " pod="openshift-ingress-canary/ingress-canary-xf9cn" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.319771 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dv2f\" (UniqueName: \"kubernetes.io/projected/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-kube-api-access-8dv2f\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.319814 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3dc6c6a9-3322-42db-a408-9a03c18a7531-registration-dir\") pod \"csi-hostpathplugin-82j72\" (UID: \"3dc6c6a9-3322-42db-a408-9a03c18a7531\") " pod="hostpath-provisioner/csi-hostpathplugin-82j72" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.319834 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/364c92fa-bb5a-428d-a999-eb6415d3f307-serving-cert\") pod \"service-ca-operator-777779d784-sjv9d\" (UID: \"364c92fa-bb5a-428d-a999-eb6415d3f307\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sjv9d" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.319937 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c33fc16-1215-438a-93e6-840ca5444e75-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s9tvt\" (UID: \"0c33fc16-1215-438a-93e6-840ca5444e75\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s9tvt" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.319960 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f18d0f3d-32c1-40d3-99da-969208958cf4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xzsnp\" (UID: \"f18d0f3d-32c1-40d3-99da-969208958cf4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xzsnp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.319991 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7b15a28-215c-46ab-b4c7-a46f5e8205ae-etcd-client\") pod \"etcd-operator-b45778765-4qz94\" (UID: \"e7b15a28-215c-46ab-b4c7-a46f5e8205ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.320025 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d5757a8b-5d94-46d5-bc18-4a6c757a9ff2-images\") pod \"machine-config-operator-74547568cd-rs9cq\" (UID: \"d5757a8b-5d94-46d5-bc18-4a6c757a9ff2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.320191 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-registry-tls\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 
03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.320210 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/edfe2658-7a40-43db-a17b-72d1ea1fde3d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lcnlf\" (UID: \"edfe2658-7a40-43db-a17b-72d1ea1fde3d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lcnlf" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.320247 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9356361a-5000-468f-bb1d-17460cd2e9dc-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-j2rkz\" (UID: \"9356361a-5000-468f-bb1d-17460cd2e9dc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2rkz" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.320282 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/031121b1-b221-434c-91c0-d9b433cd6e7c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-97sgp\" (UID: \"031121b1-b221-434c-91c0-d9b433cd6e7c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.320303 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftz5x\" (UniqueName: \"kubernetes.io/projected/9356361a-5000-468f-bb1d-17460cd2e9dc-kube-api-access-ftz5x\") pod \"machine-config-controller-84d6567774-j2rkz\" (UID: \"9356361a-5000-468f-bb1d-17460cd2e9dc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2rkz" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.320322 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/83ec5b09-78b8-44c8-8530-2375417e0c97-certs\") pod \"machine-config-server-pgpmm\" (UID: \"83ec5b09-78b8-44c8-8530-2375417e0c97\") " pod="openshift-machine-config-operator/machine-config-server-pgpmm" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.320360 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/56542c6f-259e-42a8-b62a-ea0ac38af319-tmpfs\") pod \"packageserver-d55dfcdfc-67p2b\" (UID: \"56542c6f-259e-42a8-b62a-ea0ac38af319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.320379 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/18e6f9b3-4be1-4a96-9a05-f42b40f4c2fe-signing-key\") pod \"service-ca-9c57cc56f-dndtw\" (UID: \"18e6f9b3-4be1-4a96-9a05-f42b40f4c2fe\") " pod="openshift-service-ca/service-ca-9c57cc56f-dndtw" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.322814 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-registry-certificates\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 
03:50:28.324507 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7b15a28-215c-46ab-b4c7-a46f5e8205ae-serving-cert\") pod \"etcd-operator-b45778765-4qz94\" (UID: \"e7b15a28-215c-46ab-b4c7-a46f5e8205ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.325139 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86d8d0d4-69ef-439d-b516-01b8d02cf5ce-config\") pod \"console-operator-58897d9998-nnvtr\" (UID: \"86d8d0d4-69ef-439d-b516-01b8d02cf5ce\") " pod="openshift-console-operator/console-operator-58897d9998-nnvtr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.327118 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.329962 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-console-serving-cert\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.331022 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-trusted-ca-bundle\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.331437 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-console-oauth-config\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.331562 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-oauth-serving-cert\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.331637 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-console-config\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.335534 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.336535 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86d8d0d4-69ef-439d-b516-01b8d02cf5ce-serving-cert\") pod 
\"console-operator-58897d9998-nnvtr\" (UID: \"86d8d0d4-69ef-439d-b516-01b8d02cf5ce\") " pod="openshift-console-operator/console-operator-58897d9998-nnvtr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.338670 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-config\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.342807 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7b15a28-215c-46ab-b4c7-a46f5e8205ae-etcd-client\") pod \"etcd-operator-b45778765-4qz94\" (UID: \"e7b15a28-215c-46ab-b4c7-a46f5e8205ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.343714 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/402d584b-6176-4cee-8e27-cc233b48feec-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mt595\" (UID: \"402d584b-6176-4cee-8e27-cc233b48feec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mt595" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.354085 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-registry-tls\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.369052 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.371296 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sbbxx"] Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.375777 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.381442 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b119a43-b446-4226-9490-a7ba5baf2815-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rzjpv\" (UID: \"9b119a43-b446-4226-9490-a7ba5baf2815\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.386847 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cbf59768-adfb-48f6-b68b-ebf1675f1807-etcd-serving-ca\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.394559 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 03:50:28 crc kubenswrapper[4667]: W0131 03:50:28.394551 4667 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0400a903_d02e_41b4_99f3_3c7b57744839.slice/crio-6a5e166c81dd66c5ff22e25cd901dccd5efa5b8e89d162be0cc158ac06aff32b WatchSource:0}: Error finding container 6a5e166c81dd66c5ff22e25cd901dccd5efa5b8e89d162be0cc158ac06aff32b: Status 404 returned error can't find the container with id 6a5e166c81dd66c5ff22e25cd901dccd5efa5b8e89d162be0cc158ac06aff32b Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.404036 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbvlq\" (UniqueName: \"kubernetes.io/projected/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-kube-api-access-rbvlq\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.408804 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx"] Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.414060 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.421925 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422116 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nnqh\" (UniqueName: \"kubernetes.io/projected/145e5e24-2f94-48b2-be05-b08dbbb09312-kube-api-access-9nnqh\") pod \"collect-profiles-29497185-9b5gp\" (UID: \"145e5e24-2f94-48b2-be05-b08dbbb09312\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422144 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5phmx\" (UniqueName: \"kubernetes.io/projected/9af91113-a315-4416-a1f2-6566c16278cf-kube-api-access-5phmx\") pod \"openshift-config-operator-7777fb866f-ms8lf\" (UID: \"9af91113-a315-4416-a1f2-6566c16278cf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422165 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/56542c6f-259e-42a8-b62a-ea0ac38af319-webhook-cert\") pod \"packageserver-d55dfcdfc-67p2b\" (UID: \"56542c6f-259e-42a8-b62a-ea0ac38af319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422180 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edfe2658-7a40-43db-a17b-72d1ea1fde3d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lcnlf\" (UID: \"edfe2658-7a40-43db-a17b-72d1ea1fde3d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lcnlf" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422199 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/dbbace8c-06bb-4b50-a132-a681482dc9e5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-stnvq\" (UID: \"dbbace8c-06bb-4b50-a132-a681482dc9e5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-stnvq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422215 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bc826929-768c-408f-a0f9-74bd29154340-profile-collector-cert\") pod \"catalog-operator-68c6474976-nrjrr\" (UID: \"bc826929-768c-408f-a0f9-74bd29154340\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422229 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlkfw\" (UniqueName: \"kubernetes.io/projected/18e6f9b3-4be1-4a96-9a05-f42b40f4c2fe-kube-api-access-rlkfw\") pod \"service-ca-9c57cc56f-dndtw\" (UID: \"18e6f9b3-4be1-4a96-9a05-f42b40f4c2fe\") " pod="openshift-service-ca/service-ca-9c57cc56f-dndtw" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422254 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd4qt\" (UniqueName: \"kubernetes.io/projected/9ce6e7c4-30b6-4812-8b50-bf81a13f7b9d-kube-api-access-hd4qt\") pod \"package-server-manager-789f6589d5-vsb97\" (UID: \"9ce6e7c4-30b6-4812-8b50-bf81a13f7b9d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vsb97" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422276 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d5757a8b-5d94-46d5-bc18-4a6c757a9ff2-proxy-tls\") pod \"machine-config-operator-74547568cd-rs9cq\" (UID: \"d5757a8b-5d94-46d5-bc18-4a6c757a9ff2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422301 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v87mm\" (UniqueName: \"kubernetes.io/projected/364c92fa-bb5a-428d-a999-eb6415d3f307-kube-api-access-v87mm\") pod \"service-ca-operator-777779d784-sjv9d\" (UID: \"364c92fa-bb5a-428d-a999-eb6415d3f307\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sjv9d" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422322 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed8b36d0-771d-48bb-9393-db864ff8ff84-service-ca-bundle\") pod \"router-default-5444994796-kvgs8\" (UID: \"ed8b36d0-771d-48bb-9393-db864ff8ff84\") " pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422339 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/889f5458-9c03-4be0-8f99-848f68c3ecc8-config\") pod \"kube-controller-manager-operator-78b949d7b-9pdkp\" (UID: \"889f5458-9c03-4be0-8f99-848f68c3ecc8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9pdkp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422355 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/d426c096-b6d9-4696-8066-2b9ec75356af-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7pbrg\" (UID: \"d426c096-b6d9-4696-8066-2b9ec75356af\") " pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422370 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs2h4\" (UniqueName: \"kubernetes.io/projected/bc826929-768c-408f-a0f9-74bd29154340-kube-api-access-gs2h4\") pod \"catalog-operator-68c6474976-nrjrr\" (UID: \"bc826929-768c-408f-a0f9-74bd29154340\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422385 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3dc6c6a9-3322-42db-a408-9a03c18a7531-mountpoint-dir\") pod \"csi-hostpathplugin-82j72\" (UID: \"3dc6c6a9-3322-42db-a408-9a03c18a7531\") " pod="hostpath-provisioner/csi-hostpathplugin-82j72" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422400 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cmkr\" (UniqueName: \"kubernetes.io/projected/3dc6c6a9-3322-42db-a408-9a03c18a7531-kube-api-access-9cmkr\") pod \"csi-hostpathplugin-82j72\" (UID: \"3dc6c6a9-3322-42db-a408-9a03c18a7531\") " pod="hostpath-provisioner/csi-hostpathplugin-82j72" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422416 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9356361a-5000-468f-bb1d-17460cd2e9dc-proxy-tls\") pod \"machine-config-controller-84d6567774-j2rkz\" (UID: \"9356361a-5000-468f-bb1d-17460cd2e9dc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2rkz" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422434 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-527dl\" (UniqueName: \"kubernetes.io/projected/d426c096-b6d9-4696-8066-2b9ec75356af-kube-api-access-527dl\") pod \"marketplace-operator-79b997595-7pbrg\" (UID: \"d426c096-b6d9-4696-8066-2b9ec75356af\") " pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422449 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26736509-60d6-4b4e-94b2-1fef29fa0c91-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nvj6q\" (UID: \"26736509-60d6-4b4e-94b2-1fef29fa0c91\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nvj6q" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422467 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npsrq\" (UniqueName: \"kubernetes.io/projected/dbbace8c-06bb-4b50-a132-a681482dc9e5-kube-api-access-npsrq\") pod \"control-plane-machine-set-operator-78cbb6b69f-stnvq\" (UID: \"dbbace8c-06bb-4b50-a132-a681482dc9e5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-stnvq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422494 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ed8b36d0-771d-48bb-9393-db864ff8ff84-default-certificate\") pod 
\"router-default-5444994796-kvgs8\" (UID: \"ed8b36d0-771d-48bb-9393-db864ff8ff84\") " pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422510 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/145e5e24-2f94-48b2-be05-b08dbbb09312-config-volume\") pod \"collect-profiles-29497185-9b5gp\" (UID: \"145e5e24-2f94-48b2-be05-b08dbbb09312\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422526 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bc826929-768c-408f-a0f9-74bd29154340-srv-cert\") pod \"catalog-operator-68c6474976-nrjrr\" (UID: \"bc826929-768c-408f-a0f9-74bd29154340\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422544 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3dc6c6a9-3322-42db-a408-9a03c18a7531-plugins-dir\") pod \"csi-hostpathplugin-82j72\" (UID: \"3dc6c6a9-3322-42db-a408-9a03c18a7531\") " pod="hostpath-provisioner/csi-hostpathplugin-82j72" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422562 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcjss\" (UniqueName: \"kubernetes.io/projected/d5757a8b-5d94-46d5-bc18-4a6c757a9ff2-kube-api-access-hcjss\") pod \"machine-config-operator-74547568cd-rs9cq\" (UID: \"d5757a8b-5d94-46d5-bc18-4a6c757a9ff2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422578 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d44cf093-fd97-4adf-bdad-2c3fdb4157d7-metrics-tls\") pod \"dns-default-kfr9j\" (UID: \"d44cf093-fd97-4adf-bdad-2c3fdb4157d7\") " pod="openshift-dns/dns-default-kfr9j" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422601 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c33fc16-1215-438a-93e6-840ca5444e75-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s9tvt\" (UID: \"0c33fc16-1215-438a-93e6-840ca5444e75\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s9tvt" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422616 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d426c096-b6d9-4696-8066-2b9ec75356af-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7pbrg\" (UID: \"d426c096-b6d9-4696-8066-2b9ec75356af\") " pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422631 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5608a012-09d1-4c57-9371-715625086e4d-cert\") pod \"ingress-canary-xf9cn\" (UID: \"5608a012-09d1-4c57-9371-715625086e4d\") " pod="openshift-ingress-canary/ingress-canary-xf9cn" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422655 4667 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26736509-60d6-4b4e-94b2-1fef29fa0c91-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nvj6q\" (UID: \"26736509-60d6-4b4e-94b2-1fef29fa0c91\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nvj6q" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422669 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d44cf093-fd97-4adf-bdad-2c3fdb4157d7-config-volume\") pod \"dns-default-kfr9j\" (UID: \"d44cf093-fd97-4adf-bdad-2c3fdb4157d7\") " pod="openshift-dns/dns-default-kfr9j" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422685 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ed8b36d0-771d-48bb-9393-db864ff8ff84-stats-auth\") pod \"router-default-5444994796-kvgs8\" (UID: \"ed8b36d0-771d-48bb-9393-db864ff8ff84\") " pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422698 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3dc6c6a9-3322-42db-a408-9a03c18a7531-socket-dir\") pod \"csi-hostpathplugin-82j72\" (UID: \"3dc6c6a9-3322-42db-a408-9a03c18a7531\") " pod="hostpath-provisioner/csi-hostpathplugin-82j72" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422728 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c33fc16-1215-438a-93e6-840ca5444e75-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s9tvt\" (UID: \"0c33fc16-1215-438a-93e6-840ca5444e75\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s9tvt" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422753 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/889f5458-9c03-4be0-8f99-848f68c3ecc8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9pdkp\" (UID: \"889f5458-9c03-4be0-8f99-848f68c3ecc8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9pdkp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422768 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/18e6f9b3-4be1-4a96-9a05-f42b40f4c2fe-signing-cabundle\") pod \"service-ca-9c57cc56f-dndtw\" (UID: \"18e6f9b3-4be1-4a96-9a05-f42b40f4c2fe\") " pod="openshift-service-ca/service-ca-9c57cc56f-dndtw" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422783 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3dc6c6a9-3322-42db-a408-9a03c18a7531-csi-data-dir\") pod \"csi-hostpathplugin-82j72\" (UID: \"3dc6c6a9-3322-42db-a408-9a03c18a7531\") " pod="hostpath-provisioner/csi-hostpathplugin-82j72" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422797 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed8b36d0-771d-48bb-9393-db864ff8ff84-metrics-certs\") pod \"router-default-5444994796-kvgs8\" (UID: \"ed8b36d0-771d-48bb-9393-db864ff8ff84\") " 
pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422822 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/031121b1-b221-434c-91c0-d9b433cd6e7c-srv-cert\") pod \"olm-operator-6b444d44fb-97sgp\" (UID: \"031121b1-b221-434c-91c0-d9b433cd6e7c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422846 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfxwj\" (UniqueName: \"kubernetes.io/projected/031121b1-b221-434c-91c0-d9b433cd6e7c-kube-api-access-dfxwj\") pod \"olm-operator-6b444d44fb-97sgp\" (UID: \"031121b1-b221-434c-91c0-d9b433cd6e7c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422862 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxdfh\" (UniqueName: \"kubernetes.io/projected/ed8b36d0-771d-48bb-9393-db864ff8ff84-kube-api-access-dxdfh\") pod \"router-default-5444994796-kvgs8\" (UID: \"ed8b36d0-771d-48bb-9393-db864ff8ff84\") " pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422893 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d5757a8b-5d94-46d5-bc18-4a6c757a9ff2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rs9cq\" (UID: \"d5757a8b-5d94-46d5-bc18-4a6c757a9ff2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422914 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/83ec5b09-78b8-44c8-8530-2375417e0c97-node-bootstrap-token\") pod \"machine-config-server-pgpmm\" (UID: \"83ec5b09-78b8-44c8-8530-2375417e0c97\") " pod="openshift-machine-config-operator/machine-config-server-pgpmm" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422930 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/56542c6f-259e-42a8-b62a-ea0ac38af319-apiservice-cert\") pod \"packageserver-d55dfcdfc-67p2b\" (UID: \"56542c6f-259e-42a8-b62a-ea0ac38af319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422948 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ce6e7c4-30b6-4812-8b50-bf81a13f7b9d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vsb97\" (UID: \"9ce6e7c4-30b6-4812-8b50-bf81a13f7b9d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vsb97" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422964 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/364c92fa-bb5a-428d-a999-eb6415d3f307-config\") pod \"service-ca-operator-777779d784-sjv9d\" (UID: \"364c92fa-bb5a-428d-a999-eb6415d3f307\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sjv9d" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422979 4667 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89f42\" (UniqueName: \"kubernetes.io/projected/26736509-60d6-4b4e-94b2-1fef29fa0c91-kube-api-access-89f42\") pod \"kube-storage-version-migrator-operator-b67b599dd-nvj6q\" (UID: \"26736509-60d6-4b4e-94b2-1fef29fa0c91\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nvj6q" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.422996 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j2sj\" (UniqueName: \"kubernetes.io/projected/d44cf093-fd97-4adf-bdad-2c3fdb4157d7-kube-api-access-5j2sj\") pod \"dns-default-kfr9j\" (UID: \"d44cf093-fd97-4adf-bdad-2c3fdb4157d7\") " pod="openshift-dns/dns-default-kfr9j" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.423012 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6sz5\" (UniqueName: \"kubernetes.io/projected/f18d0f3d-32c1-40d3-99da-969208958cf4-kube-api-access-r6sz5\") pod \"multus-admission-controller-857f4d67dd-xzsnp\" (UID: \"f18d0f3d-32c1-40d3-99da-969208958cf4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xzsnp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.423028 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87nnm\" (UniqueName: \"kubernetes.io/projected/5236f7ce-22c8-4283-9046-72fd92d2b7a7-kube-api-access-87nnm\") pod \"migrator-59844c95c7-czqrm\" (UID: \"5236f7ce-22c8-4283-9046-72fd92d2b7a7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-czqrm" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.423043 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edfe2658-7a40-43db-a17b-72d1ea1fde3d-config\") pod \"kube-apiserver-operator-766d6c64bb-lcnlf\" (UID: \"edfe2658-7a40-43db-a17b-72d1ea1fde3d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lcnlf" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.423057 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/889f5458-9c03-4be0-8f99-848f68c3ecc8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9pdkp\" (UID: \"889f5458-9c03-4be0-8f99-848f68c3ecc8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9pdkp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.423077 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/145e5e24-2f94-48b2-be05-b08dbbb09312-secret-volume\") pod \"collect-profiles-29497185-9b5gp\" (UID: \"145e5e24-2f94-48b2-be05-b08dbbb09312\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.423092 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6r69\" (UniqueName: \"kubernetes.io/projected/56542c6f-259e-42a8-b62a-ea0ac38af319-kube-api-access-p6r69\") pod \"packageserver-d55dfcdfc-67p2b\" (UID: \"56542c6f-259e-42a8-b62a-ea0ac38af319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.423108 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9hdtc\" (UniqueName: \"kubernetes.io/projected/83ec5b09-78b8-44c8-8530-2375417e0c97-kube-api-access-9hdtc\") pod \"machine-config-server-pgpmm\" (UID: \"83ec5b09-78b8-44c8-8530-2375417e0c97\") " pod="openshift-machine-config-operator/machine-config-server-pgpmm" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.423128 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgkbw\" (UniqueName: \"kubernetes.io/projected/5608a012-09d1-4c57-9371-715625086e4d-kube-api-access-qgkbw\") pod \"ingress-canary-xf9cn\" (UID: \"5608a012-09d1-4c57-9371-715625086e4d\") " pod="openshift-ingress-canary/ingress-canary-xf9cn" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.423146 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3dc6c6a9-3322-42db-a408-9a03c18a7531-registration-dir\") pod \"csi-hostpathplugin-82j72\" (UID: \"3dc6c6a9-3322-42db-a408-9a03c18a7531\") " pod="hostpath-provisioner/csi-hostpathplugin-82j72" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.423161 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/364c92fa-bb5a-428d-a999-eb6415d3f307-serving-cert\") pod \"service-ca-operator-777779d784-sjv9d\" (UID: \"364c92fa-bb5a-428d-a999-eb6415d3f307\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sjv9d" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.423175 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f18d0f3d-32c1-40d3-99da-969208958cf4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xzsnp\" (UID: \"f18d0f3d-32c1-40d3-99da-969208958cf4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xzsnp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.423190 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d5757a8b-5d94-46d5-bc18-4a6c757a9ff2-images\") pod \"machine-config-operator-74547568cd-rs9cq\" (UID: \"d5757a8b-5d94-46d5-bc18-4a6c757a9ff2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.423203 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c33fc16-1215-438a-93e6-840ca5444e75-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s9tvt\" (UID: \"0c33fc16-1215-438a-93e6-840ca5444e75\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s9tvt" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.423221 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/edfe2658-7a40-43db-a17b-72d1ea1fde3d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lcnlf\" (UID: \"edfe2658-7a40-43db-a17b-72d1ea1fde3d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lcnlf" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.423244 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9356361a-5000-468f-bb1d-17460cd2e9dc-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-j2rkz\" (UID: \"9356361a-5000-468f-bb1d-17460cd2e9dc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2rkz" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.423258 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/031121b1-b221-434c-91c0-d9b433cd6e7c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-97sgp\" (UID: \"031121b1-b221-434c-91c0-d9b433cd6e7c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.423272 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/83ec5b09-78b8-44c8-8530-2375417e0c97-certs\") pod \"machine-config-server-pgpmm\" (UID: \"83ec5b09-78b8-44c8-8530-2375417e0c97\") " pod="openshift-machine-config-operator/machine-config-server-pgpmm" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.423287 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/56542c6f-259e-42a8-b62a-ea0ac38af319-tmpfs\") pod \"packageserver-d55dfcdfc-67p2b\" (UID: \"56542c6f-259e-42a8-b62a-ea0ac38af319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.423302 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftz5x\" (UniqueName: \"kubernetes.io/projected/9356361a-5000-468f-bb1d-17460cd2e9dc-kube-api-access-ftz5x\") pod \"machine-config-controller-84d6567774-j2rkz\" (UID: \"9356361a-5000-468f-bb1d-17460cd2e9dc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2rkz" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.423317 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/18e6f9b3-4be1-4a96-9a05-f42b40f4c2fe-signing-key\") pod \"service-ca-9c57cc56f-dndtw\" (UID: \"18e6f9b3-4be1-4a96-9a05-f42b40f4c2fe\") " pod="openshift-service-ca/service-ca-9c57cc56f-dndtw" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.424144 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3dc6c6a9-3322-42db-a408-9a03c18a7531-socket-dir\") pod \"csi-hostpathplugin-82j72\" (UID: \"3dc6c6a9-3322-42db-a408-9a03c18a7531\") " pod="hostpath-provisioner/csi-hostpathplugin-82j72" Jan 31 03:50:28 crc kubenswrapper[4667]: E0131 03:50:28.424217 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:28.924202417 +0000 UTC m=+152.440537716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.427962 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26736509-60d6-4b4e-94b2-1fef29fa0c91-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nvj6q\" (UID: \"26736509-60d6-4b4e-94b2-1fef29fa0c91\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nvj6q" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.428337 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9356361a-5000-468f-bb1d-17460cd2e9dc-proxy-tls\") pod \"machine-config-controller-84d6567774-j2rkz\" (UID: \"9356361a-5000-468f-bb1d-17460cd2e9dc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2rkz" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.428404 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d426c096-b6d9-4696-8066-2b9ec75356af-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7pbrg\" (UID: \"d426c096-b6d9-4696-8066-2b9ec75356af\") " pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.428711 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3dc6c6a9-3322-42db-a408-9a03c18a7531-mountpoint-dir\") pod \"csi-hostpathplugin-82j72\" (UID: \"3dc6c6a9-3322-42db-a408-9a03c18a7531\") " pod="hostpath-provisioner/csi-hostpathplugin-82j72" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.429206 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed8b36d0-771d-48bb-9393-db864ff8ff84-service-ca-bundle\") pod \"router-default-5444994796-kvgs8\" (UID: \"ed8b36d0-771d-48bb-9393-db864ff8ff84\") " pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.429943 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/889f5458-9c03-4be0-8f99-848f68c3ecc8-config\") pod \"kube-controller-manager-operator-78b949d7b-9pdkp\" (UID: \"889f5458-9c03-4be0-8f99-848f68c3ecc8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9pdkp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.431183 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d5757a8b-5d94-46d5-bc18-4a6c757a9ff2-proxy-tls\") pod \"machine-config-operator-74547568cd-rs9cq\" (UID: \"d5757a8b-5d94-46d5-bc18-4a6c757a9ff2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.431708 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
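The E-level entry above (Unmounter.TearDownAt failed to get CSI client) recurs throughout this boot: any mount or unmount of a CSI volume first resolves the volume's driver name against the set of drivers currently registered with the kubelet, and kubevirt.io.hostpath-provisioner has not registered yet because its csi-hostpathplugin-82j72 pod is itself still being started. A sketch of that lookup, assuming a plain map in place of kubelet's real plugin registry:

package main

import "fmt"

// Illustrative stand-in for kubelet's CSI plugin registry: operations on a
// CSI volume resolve the driver name here first, and fail fast (as in the
// TearDown error above) when the driver has not registered yet.
var registeredCSIDrivers = map[string]struct{}{}

func newCSIDriverClient(driver string) error {
	if _, ok := registeredCSIDrivers[driver]; !ok {
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
	}
	return nil // the real code hands back a gRPC client for the driver's socket
}

func main() {
	fmt.Println(newCSIDriverClient("kubevirt.io.hostpath-provisioner")) // error: not registered yet
	registeredCSIDrivers["kubevirt.io.hostpath-provisioner"] = struct{}{}
	fmt.Println(newCSIDriverClient("kubevirt.io.hostpath-provisioner")) // <nil> once the driver registers
}

Once the driver pod registers itself with the kubelet, the retried operations stop failing; the sandbox-start entries for csi-hostpathplugin-82j72 later in this log are that driver coming up.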
\"kubernetes.io/configmap/0c33fc16-1215-438a-93e6-840ca5444e75-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s9tvt\" (UID: \"0c33fc16-1215-438a-93e6-840ca5444e75\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s9tvt" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.432103 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/18e6f9b3-4be1-4a96-9a05-f42b40f4c2fe-signing-key\") pod \"service-ca-9c57cc56f-dndtw\" (UID: \"18e6f9b3-4be1-4a96-9a05-f42b40f4c2fe\") " pod="openshift-service-ca/service-ca-9c57cc56f-dndtw" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.432986 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d44cf093-fd97-4adf-bdad-2c3fdb4157d7-config-volume\") pod \"dns-default-kfr9j\" (UID: \"d44cf093-fd97-4adf-bdad-2c3fdb4157d7\") " pod="openshift-dns/dns-default-kfr9j" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.433635 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c33fc16-1215-438a-93e6-840ca5444e75-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s9tvt\" (UID: \"0c33fc16-1215-438a-93e6-840ca5444e75\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s9tvt" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.433736 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/145e5e24-2f94-48b2-be05-b08dbbb09312-config-volume\") pod \"collect-profiles-29497185-9b5gp\" (UID: \"145e5e24-2f94-48b2-be05-b08dbbb09312\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.436099 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5phmx\" (UniqueName: \"kubernetes.io/projected/9af91113-a315-4416-a1f2-6566c16278cf-kube-api-access-5phmx\") pod \"openshift-config-operator-7777fb866f-ms8lf\" (UID: \"9af91113-a315-4416-a1f2-6566c16278cf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.436618 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/889f5458-9c03-4be0-8f99-848f68c3ecc8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9pdkp\" (UID: \"889f5458-9c03-4be0-8f99-848f68c3ecc8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9pdkp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.436776 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.437139 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3dc6c6a9-3322-42db-a408-9a03c18a7531-plugins-dir\") pod \"csi-hostpathplugin-82j72\" (UID: \"3dc6c6a9-3322-42db-a408-9a03c18a7531\") " pod="hostpath-provisioner/csi-hostpathplugin-82j72" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.437224 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/ed8b36d0-771d-48bb-9393-db864ff8ff84-default-certificate\") pod \"router-default-5444994796-kvgs8\" (UID: \"ed8b36d0-771d-48bb-9393-db864ff8ff84\") " pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.437617 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ed8b36d0-771d-48bb-9393-db864ff8ff84-stats-auth\") pod \"router-default-5444994796-kvgs8\" (UID: \"ed8b36d0-771d-48bb-9393-db864ff8ff84\") " pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.438684 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/364c92fa-bb5a-428d-a999-eb6415d3f307-config\") pod \"service-ca-operator-777779d784-sjv9d\" (UID: \"364c92fa-bb5a-428d-a999-eb6415d3f307\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sjv9d" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.439266 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d5757a8b-5d94-46d5-bc18-4a6c757a9ff2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rs9cq\" (UID: \"d5757a8b-5d94-46d5-bc18-4a6c757a9ff2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.440172 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/18e6f9b3-4be1-4a96-9a05-f42b40f4c2fe-signing-cabundle\") pod \"service-ca-9c57cc56f-dndtw\" (UID: \"18e6f9b3-4be1-4a96-9a05-f42b40f4c2fe\") " pod="openshift-service-ca/service-ca-9c57cc56f-dndtw" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.440251 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3dc6c6a9-3322-42db-a408-9a03c18a7531-csi-data-dir\") pod \"csi-hostpathplugin-82j72\" (UID: \"3dc6c6a9-3322-42db-a408-9a03c18a7531\") " pod="hostpath-provisioner/csi-hostpathplugin-82j72" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.440914 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edfe2658-7a40-43db-a17b-72d1ea1fde3d-config\") pod \"kube-apiserver-operator-766d6c64bb-lcnlf\" (UID: \"edfe2658-7a40-43db-a17b-72d1ea1fde3d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lcnlf" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.441752 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3dc6c6a9-3322-42db-a408-9a03c18a7531-registration-dir\") pod \"csi-hostpathplugin-82j72\" (UID: \"3dc6c6a9-3322-42db-a408-9a03c18a7531\") " pod="hostpath-provisioner/csi-hostpathplugin-82j72" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.445358 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d5757a8b-5d94-46d5-bc18-4a6c757a9ff2-images\") pod \"machine-config-operator-74547568cd-rs9cq\" (UID: \"d5757a8b-5d94-46d5-bc18-4a6c757a9ff2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.446956 4667 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9356361a-5000-468f-bb1d-17460cd2e9dc-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-j2rkz\" (UID: \"9356361a-5000-468f-bb1d-17460cd2e9dc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2rkz" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.450347 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/56542c6f-259e-42a8-b62a-ea0ac38af319-tmpfs\") pod \"packageserver-d55dfcdfc-67p2b\" (UID: \"56542c6f-259e-42a8-b62a-ea0ac38af319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.450724 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/83ec5b09-78b8-44c8-8530-2375417e0c97-node-bootstrap-token\") pod \"machine-config-server-pgpmm\" (UID: \"83ec5b09-78b8-44c8-8530-2375417e0c97\") " pod="openshift-machine-config-operator/machine-config-server-pgpmm" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.450932 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbbace8c-06bb-4b50-a132-a681482dc9e5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-stnvq\" (UID: \"dbbace8c-06bb-4b50-a132-a681482dc9e5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-stnvq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.450991 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/145e5e24-2f94-48b2-be05-b08dbbb09312-secret-volume\") pod \"collect-profiles-29497185-9b5gp\" (UID: \"145e5e24-2f94-48b2-be05-b08dbbb09312\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.451290 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26736509-60d6-4b4e-94b2-1fef29fa0c91-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nvj6q\" (UID: \"26736509-60d6-4b4e-94b2-1fef29fa0c91\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nvj6q" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.451415 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/56542c6f-259e-42a8-b62a-ea0ac38af319-webhook-cert\") pod \"packageserver-d55dfcdfc-67p2b\" (UID: \"56542c6f-259e-42a8-b62a-ea0ac38af319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.451603 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d44cf093-fd97-4adf-bdad-2c3fdb4157d7-metrics-tls\") pod \"dns-default-kfr9j\" (UID: \"d44cf093-fd97-4adf-bdad-2c3fdb4157d7\") " pod="openshift-dns/dns-default-kfr9j" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.451604 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/364c92fa-bb5a-428d-a999-eb6415d3f307-serving-cert\") pod \"service-ca-operator-777779d784-sjv9d\" (UID: 
\"364c92fa-bb5a-428d-a999-eb6415d3f307\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sjv9d" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.452143 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bc826929-768c-408f-a0f9-74bd29154340-srv-cert\") pod \"catalog-operator-68c6474976-nrjrr\" (UID: \"bc826929-768c-408f-a0f9-74bd29154340\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.452809 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edfe2658-7a40-43db-a17b-72d1ea1fde3d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lcnlf\" (UID: \"edfe2658-7a40-43db-a17b-72d1ea1fde3d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lcnlf" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.456883 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5608a012-09d1-4c57-9371-715625086e4d-cert\") pod \"ingress-canary-xf9cn\" (UID: \"5608a012-09d1-4c57-9371-715625086e4d\") " pod="openshift-ingress-canary/ingress-canary-xf9cn" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.457466 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/56542c6f-259e-42a8-b62a-ea0ac38af319-apiservice-cert\") pod \"packageserver-d55dfcdfc-67p2b\" (UID: \"56542c6f-259e-42a8-b62a-ea0ac38af319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.458302 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/031121b1-b221-434c-91c0-d9b433cd6e7c-srv-cert\") pod \"olm-operator-6b444d44fb-97sgp\" (UID: \"031121b1-b221-434c-91c0-d9b433cd6e7c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.458816 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bc826929-768c-408f-a0f9-74bd29154340-profile-collector-cert\") pod \"catalog-operator-68c6474976-nrjrr\" (UID: \"bc826929-768c-408f-a0f9-74bd29154340\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.459144 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f18d0f3d-32c1-40d3-99da-969208958cf4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xzsnp\" (UID: \"f18d0f3d-32c1-40d3-99da-969208958cf4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xzsnp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.459418 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.460371 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-service-ca-bundle\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.460424 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed8b36d0-771d-48bb-9393-db864ff8ff84-metrics-certs\") pod \"router-default-5444994796-kvgs8\" (UID: \"ed8b36d0-771d-48bb-9393-db864ff8ff84\") " pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.460999 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/83ec5b09-78b8-44c8-8530-2375417e0c97-certs\") pod \"machine-config-server-pgpmm\" (UID: \"83ec5b09-78b8-44c8-8530-2375417e0c97\") " pod="openshift-machine-config-operator/machine-config-server-pgpmm" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.463661 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d426c096-b6d9-4696-8066-2b9ec75356af-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7pbrg\" (UID: \"d426c096-b6d9-4696-8066-2b9ec75356af\") " pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.468631 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/031121b1-b221-434c-91c0-d9b433cd6e7c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-97sgp\" (UID: \"031121b1-b221-434c-91c0-d9b433cd6e7c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.471787 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ce6e7c4-30b6-4812-8b50-bf81a13f7b9d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vsb97\" (UID: \"9ce6e7c4-30b6-4812-8b50-bf81a13f7b9d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vsb97" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.474590 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 03:50:28 crc kubenswrapper[4667]: E0131 03:50:28.484352 4667 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:28 crc kubenswrapper[4667]: E0131 03:50:28.484373 4667 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:28 crc kubenswrapper[4667]: E0131 03:50:28.484422 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b119a43-b446-4226-9490-a7ba5baf2815-trusted-ca podName:9b119a43-b446-4226-9490-a7ba5baf2815 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:29.484402073 +0000 UTC m=+153.000737372 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9b119a43-b446-4226-9490-a7ba5baf2815-trusted-ca") pod "cluster-image-registry-operator-dc59b4c8b-rzjpv" (UID: "9b119a43-b446-4226-9490-a7ba5baf2815") : failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:28 crc kubenswrapper[4667]: E0131 03:50:28.484443 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-trusted-ca-bundle podName:3a49a8a9-82b6-4374-a43a-224f2f9e14a4 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:29.484434343 +0000 UTC m=+153.000769642 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-trusted-ca-bundle") pod "authentication-operator-69f744f599-xhcb5" (UID: "3a49a8a9-82b6-4374-a43a-224f2f9e14a4") : failed to sync configmap cache: timed out waiting for the condition Jan 31 03:50:28 crc kubenswrapper[4667]: E0131 03:50:28.484472 4667 secret.go:188] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Jan 31 03:50:28 crc kubenswrapper[4667]: E0131 03:50:28.484585 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbf59768-adfb-48f6-b68b-ebf1675f1807-etcd-client podName:cbf59768-adfb-48f6-b68b-ebf1675f1807 nodeName:}" failed. No retries permitted until 2026-01-31 03:50:29.484554467 +0000 UTC m=+153.000889956 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/cbf59768-adfb-48f6-b68b-ebf1675f1807-etcd-client") pod "apiserver-76f77b778f-7txvq" (UID: "cbf59768-adfb-48f6-b68b-ebf1675f1807") : failed to sync secret cache: timed out waiting for the condition Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.489599 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-serving-cert\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.494640 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.516760 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.522229 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf59768-adfb-48f6-b68b-ebf1675f1807-serving-cert\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.530782 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: E0131 03:50:28.533406 4667 
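The "failed to sync configmap cache" / "failed to sync secret cache" errors above mean those mounts ran before kubelet's watch-backed object cache had seen the ConfigMaps and Secrets in question; the surrounding "Caches populated" reflector entries show the caches catching up, after which the retried mounts succeed. A stdlib-only sketch of that wait-then-timeout step, illustrative rather than kubelet's actual cache manager:

package main

import (
	"errors"
	"fmt"
	"time"
)

// Stdlib-only sketch: a configmap/secret volume mount cannot proceed until
// the kubelet's watch-backed cache has seen the object, so the mount path
// polls for the cache and gives up with the error seen above on timeout.
func waitForCacheSync(synced func() bool, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if synced() {
			return nil
		}
		time.Sleep(100 * time.Millisecond)
	}
	return errors.New("failed to sync configmap cache: timed out waiting for the condition")
}

func main() {
	ready := time.Now().Add(300 * time.Millisecond) // cache catches up shortly
	err := waitForCacheSync(func() bool { return time.Now().After(ready) }, time.Second)
	fmt.Println(err) // <nil>: the retried mount would now succeed
}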
Jan 31 03:50:28 crc kubenswrapper[4667]: E0131 03:50:28.533406 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:29.033388637 +0000 UTC m=+152.549723936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.539211 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.545845 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc88x\" (UniqueName: \"kubernetes.io/projected/cbf59768-adfb-48f6-b68b-ebf1675f1807-kube-api-access-xc88x\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.557439 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.585293 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.597831 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-trusted-ca\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.602594 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.613546 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.634316 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 03:50:28 crc kubenswrapper[4667]: E0131 03:50:28.634977 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:29.134957239 +0000 UTC m=+152.651292538 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.648057 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-bound-sa-token\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.675982 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsnvh\" (UniqueName: \"kubernetes.io/projected/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-kube-api-access-gsnvh\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.695372 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f8g8\" (UniqueName: \"kubernetes.io/projected/e7b15a28-215c-46ab-b4c7-a46f5e8205ae-kube-api-access-9f8g8\") pod \"etcd-operator-b45778765-4qz94\" (UID: \"e7b15a28-215c-46ab-b4c7-a46f5e8205ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.713823 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkq9f\" (UniqueName: \"kubernetes.io/projected/86d8d0d4-69ef-439d-b516-01b8d02cf5ce-kube-api-access-tkq9f\") pod \"console-operator-58897d9998-nnvtr\" (UID: \"86d8d0d4-69ef-439d-b516-01b8d02cf5ce\") " pod="openshift-console-operator/console-operator-58897d9998-nnvtr"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.735349 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dv2f\" (UniqueName: \"kubernetes.io/projected/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-kube-api-access-8dv2f\") pod \"console-f9d7485db-wjsth\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " pod="openshift-console/console-f9d7485db-wjsth"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.736111 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m"
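Each failed volume operation above is rescheduled with a per-operation backoff: "durationBeforeRetry 500ms" on the first failure, 1s on the next (as in the trusted-ca entries earlier), doubling toward a ceiling. A sketch of that progression; the 2m2s ceiling is assumed from upstream kubelet defaults, not read from this cluster:

package main

import (
	"fmt"
	"time"
)

// Sketch of the per-operation backoff visible in "durationBeforeRetry":
// start at 500ms and double per failure up to a ceiling. The 2m2s ceiling
// here is an assumption based on upstream kubelet defaults.
func nextBackoff(current time.Duration) time.Duration {
	const (
		initialDelay = 500 * time.Millisecond
		maxDelay     = 2*time.Minute + 2*time.Second
	)
	switch {
	case current < initialDelay:
		return initialDelay
	case current > maxDelay/2:
		return maxDelay
	default:
		return 2 * current
	}
}

func main() {
	var d time.Duration
	for i := 0; i < 5; i++ {
		d = nextBackoff(d)
		fmt.Println(d) // 500ms, 1s, 2s, 4s, 8s
	}
}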
Jan 31 03:50:28 crc kubenswrapper[4667]: E0131 03:50:28.736447 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:29.236433898 +0000 UTC m=+152.752769197 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.755840 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nnqh\" (UniqueName: \"kubernetes.io/projected/145e5e24-2f94-48b2-be05-b08dbbb09312-kube-api-access-9nnqh\") pod \"collect-profiles-29497185-9b5gp\" (UID: \"145e5e24-2f94-48b2-be05-b08dbbb09312\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.783734 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-527dl\" (UniqueName: \"kubernetes.io/projected/d426c096-b6d9-4696-8066-2b9ec75356af-kube-api-access-527dl\") pod \"marketplace-operator-79b997595-7pbrg\" (UID: \"d426c096-b6d9-4696-8066-2b9ec75356af\") " pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.800216 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npsrq\" (UniqueName: \"kubernetes.io/projected/dbbace8c-06bb-4b50-a132-a681482dc9e5-kube-api-access-npsrq\") pod \"control-plane-machine-set-operator-78cbb6b69f-stnvq\" (UID: \"dbbace8c-06bb-4b50-a132-a681482dc9e5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-stnvq"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.817328 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v87mm\" (UniqueName: \"kubernetes.io/projected/364c92fa-bb5a-428d-a999-eb6415d3f307-kube-api-access-v87mm\") pod \"service-ca-operator-777779d784-sjv9d\" (UID: \"364c92fa-bb5a-428d-a999-eb6415d3f307\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-sjv9d"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.837339 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.837926 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ksbp\" (UniqueName: \"kubernetes.io/projected/402d584b-6176-4cee-8e27-cc233b48feec-kube-api-access-4ksbp\") pod \"openshift-apiserver-operator-796bbdcf4f-mt595\" (UID: \"402d584b-6176-4cee-8e27-cc233b48feec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mt595"
Jan 31 03:50:28 crc kubenswrapper[4667]: E0131 03:50:28.839287 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:29.339269763 +0000 UTC m=+152.855605062 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.842463 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ksbp\" (UniqueName: \"kubernetes.io/projected/402d584b-6176-4cee-8e27-cc233b48feec-kube-api-access-4ksbp\") pod \"openshift-apiserver-operator-796bbdcf4f-mt595\" (UID: \"402d584b-6176-4cee-8e27-cc233b48feec\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mt595" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.842656 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.845146 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs2h4\" (UniqueName: \"kubernetes.io/projected/bc826929-768c-408f-a0f9-74bd29154340-kube-api-access-gs2h4\") pod \"catalog-operator-68c6474976-nrjrr\" (UID: \"bc826929-768c-408f-a0f9-74bd29154340\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.853659 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nnvtr" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.869715 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.876299 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cmkr\" (UniqueName: \"kubernetes.io/projected/3dc6c6a9-3322-42db-a408-9a03c18a7531-kube-api-access-9cmkr\") pod \"csi-hostpathplugin-82j72\" (UID: \"3dc6c6a9-3322-42db-a408-9a03c18a7531\") " pod="hostpath-provisioner/csi-hostpathplugin-82j72" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.902373 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcjss\" (UniqueName: \"kubernetes.io/projected/d5757a8b-5d94-46d5-bc18-4a6c757a9ff2-kube-api-access-hcjss\") pod \"machine-config-operator-74547568cd-rs9cq\" (UID: \"d5757a8b-5d94-46d5-bc18-4a6c757a9ff2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.905913 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89f42\" (UniqueName: \"kubernetes.io/projected/26736509-60d6-4b4e-94b2-1fef29fa0c91-kube-api-access-89f42\") pod \"kube-storage-version-migrator-operator-b67b599dd-nvj6q\" (UID: \"26736509-60d6-4b4e-94b2-1fef29fa0c91\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nvj6q" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.913416 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j2sj\" (UniqueName: \"kubernetes.io/projected/d44cf093-fd97-4adf-bdad-2c3fdb4157d7-kube-api-access-5j2sj\") pod \"dns-default-kfr9j\" (UID: \"d44cf093-fd97-4adf-bdad-2c3fdb4157d7\") " pod="openshift-dns/dns-default-kfr9j" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.930652 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6sz5\" (UniqueName: \"kubernetes.io/projected/f18d0f3d-32c1-40d3-99da-969208958cf4-kube-api-access-r6sz5\") pod \"multus-admission-controller-857f4d67dd-xzsnp\" (UID: \"f18d0f3d-32c1-40d3-99da-969208958cf4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xzsnp" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.938712 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nvj6q" Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.940462 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:28 crc kubenswrapper[4667]: E0131 03:50:28.940779 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:29.440751993 +0000 UTC m=+152.957087292 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.947576 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-xzsnp"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.964072 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-sjv9d"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.979484 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/889f5458-9c03-4be0-8f99-848f68c3ecc8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9pdkp\" (UID: \"889f5458-9c03-4be0-8f99-848f68c3ecc8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9pdkp"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.983003 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87nnm\" (UniqueName: \"kubernetes.io/projected/5236f7ce-22c8-4283-9046-72fd92d2b7a7-kube-api-access-87nnm\") pod \"migrator-59844c95c7-czqrm\" (UID: \"5236f7ce-22c8-4283-9046-72fd92d2b7a7\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-czqrm"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.991446 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg"
Jan 31 03:50:28 crc kubenswrapper[4667]: I0131 03:50:28.999671 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6r69\" (UniqueName: \"kubernetes.io/projected/56542c6f-259e-42a8-b62a-ea0ac38af319-kube-api-access-p6r69\") pod \"packageserver-d55dfcdfc-67p2b\" (UID: \"56542c6f-259e-42a8-b62a-ea0ac38af319\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b"
Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.013558 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf"]
Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.015502 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hdtc\" (UniqueName: \"kubernetes.io/projected/83ec5b09-78b8-44c8-8530-2375417e0c97-kube-api-access-9hdtc\") pod \"machine-config-server-pgpmm\" (UID: \"83ec5b09-78b8-44c8-8530-2375417e0c97\") " pod="openshift-machine-config-operator/machine-config-server-pgpmm"
Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.016521 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-stnvq"
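"No sandbox for pod can be found. Need to start a new one" is the expected first-sync path after a node (re)boot: kubelet finds no live sandbox for the pod in the container runtime, so it creates one before starting any containers. A sketch of that decision, with an illustrative interface standing in for the real CRI RuntimeService:

package main

import "log"

// Sketch of the check behind "No sandbox for pod can be found. Need to start
// a new one": before any containers run, SyncPod makes sure the pod has a
// runtime sandbox. The interface is illustrative, not the real CRI API.
type sandboxRuntime interface {
	HasSandbox(podUID string) bool
	RunPodSandbox(podUID string) (string, error)
}

func ensureSandbox(rt sandboxRuntime, podUID string) (string, error) {
	if rt.HasSandbox(podUID) {
		return podUID, nil // reuse the existing sandbox
	}
	log.Printf("No sandbox for pod can be found. Need to start a new one pod=%q", podUID)
	return rt.RunPodSandbox(podUID) // containers are then started inside it
}

type fakeRuntime struct{}

func (fakeRuntime) HasSandbox(string) bool { return false }
func (fakeRuntime) RunPodSandbox(uid string) (string, error) {
	return "sandbox-for-" + uid, nil
}

func main() {
	id, _ := ensureSandbox(fakeRuntime{}, "9af91113-a315-4416-a1f2-6566c16278cf")
	log.Println("started", id)
}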
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.032987 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgkbw\" (UniqueName: \"kubernetes.io/projected/5608a012-09d1-4c57-9371-715625086e4d-kube-api-access-qgkbw\") pod \"ingress-canary-xf9cn\" (UID: \"5608a012-09d1-4c57-9371-715625086e4d\") " pod="openshift-ingress-canary/ingress-canary-xf9cn" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.040237 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.041196 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:29 crc kubenswrapper[4667]: E0131 03:50:29.041614 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:29.541595526 +0000 UTC m=+153.057930825 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.051607 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9pdkp" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.054412 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c33fc16-1215-438a-93e6-840ca5444e75-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s9tvt\" (UID: \"0c33fc16-1215-438a-93e6-840ca5444e75\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s9tvt" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.063767 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.073906 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/edfe2658-7a40-43db-a17b-72d1ea1fde3d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lcnlf\" (UID: \"edfe2658-7a40-43db-a17b-72d1ea1fde3d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lcnlf" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.084760 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-82j72" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.091212 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xf9cn" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.099066 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-kfr9j" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.103022 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pgpmm" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.103725 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftz5x\" (UniqueName: \"kubernetes.io/projected/9356361a-5000-468f-bb1d-17460cd2e9dc-kube-api-access-ftz5x\") pod \"machine-config-controller-84d6567774-j2rkz\" (UID: \"9356361a-5000-468f-bb1d-17460cd2e9dc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2rkz" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.113157 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mt595" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.117187 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlkfw\" (UniqueName: \"kubernetes.io/projected/18e6f9b3-4be1-4a96-9a05-f42b40f4c2fe-kube-api-access-rlkfw\") pod \"service-ca-9c57cc56f-dndtw\" (UID: \"18e6f9b3-4be1-4a96-9a05-f42b40f4c2fe\") " pod="openshift-service-ca/service-ca-9c57cc56f-dndtw" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.136949 4667 csr.go:261] certificate signing request csr-d7jdh is approved, waiting to be issued Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.141100 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfxwj\" (UniqueName: \"kubernetes.io/projected/031121b1-b221-434c-91c0-d9b433cd6e7c-kube-api-access-dfxwj\") pod \"olm-operator-6b444d44fb-97sgp\" (UID: \"031121b1-b221-434c-91c0-d9b433cd6e7c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.143382 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:29 crc kubenswrapper[4667]: E0131 03:50:29.143915 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:29.643888387 +0000 UTC m=+153.160223686 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.147105 4667 csr.go:257] certificate signing request csr-d7jdh is issued Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.154180 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxdfh\" (UniqueName: \"kubernetes.io/projected/ed8b36d0-771d-48bb-9393-db864ff8ff84-kube-api-access-dxdfh\") pod \"router-default-5444994796-kvgs8\" (UID: \"ed8b36d0-771d-48bb-9393-db864ff8ff84\") " pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.176543 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd4qt\" (UniqueName: \"kubernetes.io/projected/9ce6e7c4-30b6-4812-8b50-bf81a13f7b9d-kube-api-access-hd4qt\") pod \"package-server-manager-789f6589d5-vsb97\" (UID: \"9ce6e7c4-30b6-4812-8b50-bf81a13f7b9d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vsb97" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.191833 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" event={"ID":"9af91113-a315-4416-a1f2-6566c16278cf","Type":"ContainerStarted","Data":"284af03feec40fb5b5d361c7197d6e8717684bd7aa75824f20a91fc0b5a749a9"} Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.210711 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" event={"ID":"4dd56584-ddc5-48e9-be73-9758dca8dddf","Type":"ContainerStarted","Data":"833b00796ec68c6cd70099c4dec1ad93e83eb7ce365ffc8c0a0a6fedc348c29d"} Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.215328 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lcnlf" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.221420 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.229547 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-czqrm" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.234524 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-sbbxx" event={"ID":"0400a903-d02e-41b4-99f3-3c7b57744839","Type":"ContainerStarted","Data":"b96c434ce0b01c7563f7985021e0cb855b1aa1f9279bab86cee48404a74aaefb"} Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.234587 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-sbbxx" event={"ID":"0400a903-d02e-41b4-99f3-3c7b57744839","Type":"ContainerStarted","Data":"6a5e166c81dd66c5ff22e25cd901dccd5efa5b8e89d162be0cc158ac06aff32b"} Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.247325 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:29 crc kubenswrapper[4667]: E0131 03:50:29.250217 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:29.750193552 +0000 UTC m=+153.266528851 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.264241 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.268875 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx" event={"ID":"c2cef73f-5410-499e-ae70-491c866c1b48","Type":"ContainerStarted","Data":"f3010bf3edfed593d25be3133dc2924bd809d55d3dd854caedef778ecdd5b321"} Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.268961 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx" event={"ID":"c2cef73f-5410-499e-ae70-491c866c1b48","Type":"ContainerStarted","Data":"f4712af4e2667394a1ce627228fd5eec2d37485186fa72aeb3ec532f87596928"} Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.268978 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx" event={"ID":"c2cef73f-5410-499e-ae70-491c866c1b48","Type":"ContainerStarted","Data":"9cf6a5863fd86560d25b004161945013768e2887104e26f0e052fd9ba703215d"} Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.270786 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2rkz" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.281843 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.298421 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vsb97" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.309392 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-dndtw" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.315887 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" event={"ID":"21469e62-0345-41f0-a07b-eac67df38faf","Type":"ContainerStarted","Data":"a5ca1121b9f8156bdbdb88c33f3f31395dcd7420156ff737b9cded03e3febfb9"} Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.325640 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.323096 4667 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-dmtcm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.28:6443/healthz\": dial tcp 10.217.0.28:6443: connect: connection refused" start-of-body= Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.325767 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" podUID="21469e62-0345-41f0-a07b-eac67df38faf" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.28:6443/healthz\": dial tcp 10.217.0.28:6443: connect: connection refused" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.325717 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" event={"ID":"21469e62-0345-41f0-a07b-eac67df38faf","Type":"ContainerStarted","Data":"f5fccc3b129ee1d13e7631cbec910946268d73dd54d2da22948f6411e6dcf949"} Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.323379 4667 patch_prober.go:28] interesting pod/downloads-7954f5f757-5zj2q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.326029 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5zj2q" podUID="745b1e30-1f16-4539-847b-88db36eb6d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.336399 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.339117 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.344138 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s9tvt" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.355995 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:29 crc kubenswrapper[4667]: E0131 03:50:29.362131 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:29.862100253 +0000 UTC m=+153.378435552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.367717 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4qz94"] Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.459276 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:29 crc kubenswrapper[4667]: E0131 03:50:29.470437 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:29.97041669 +0000 UTC m=+153.486751979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.578605 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.578654 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b119a43-b446-4226-9490-a7ba5baf2815-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rzjpv\" (UID: \"9b119a43-b446-4226-9490-a7ba5baf2815\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.578723 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.578743 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cbf59768-adfb-48f6-b68b-ebf1675f1807-etcd-client\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.586168 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nnvtr"] Jan 31 03:50:29 crc kubenswrapper[4667]: E0131 03:50:29.586441 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:30.086429708 +0000 UTC m=+153.602765007 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.590007 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cbf59768-adfb-48f6-b68b-ebf1675f1807-etcd-client\") pod \"apiserver-76f77b778f-7txvq\" (UID: \"cbf59768-adfb-48f6-b68b-ebf1675f1807\") " pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.590172 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a49a8a9-82b6-4374-a43a-224f2f9e14a4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xhcb5\" (UID: \"3a49a8a9-82b6-4374-a43a-224f2f9e14a4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.590290 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b119a43-b446-4226-9490-a7ba5baf2815-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rzjpv\" (UID: \"9b119a43-b446-4226-9490-a7ba5baf2815\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv" Jan 31 03:50:29 crc kubenswrapper[4667]: W0131 03:50:29.665257 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86d8d0d4_69ef_439d_b516_01b8d02cf5ce.slice/crio-310531a58fcbcd7b5357984aa0cd32daa9dca1f5300e446344b03da1db4500a5 WatchSource:0}: Error finding container 310531a58fcbcd7b5357984aa0cd32daa9dca1f5300e446344b03da1db4500a5: Status 404 returned error can't find the container with id 310531a58fcbcd7b5357984aa0cd32daa9dca1f5300e446344b03da1db4500a5 Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.675242 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.680300 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:29 crc kubenswrapper[4667]: E0131 03:50:29.680665 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:30.180650349 +0000 UTC m=+153.696985648 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.692156 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.757024 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wjsth"] Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.758226 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.784777 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:29 crc kubenswrapper[4667]: E0131 03:50:29.785100 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:30.285090445 +0000 UTC m=+153.801425744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.809886 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xzsnp"] Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.842462 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" podStartSLOduration=126.842446107 podStartE2EDuration="2m6.842446107s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:29.839334376 +0000 UTC m=+153.355669675" watchObservedRunningTime="2026-01-31 03:50:29.842446107 +0000 UTC m=+153.358781406" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.842748 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-stnvq"] Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.893185 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:29 crc kubenswrapper[4667]: E0131 03:50:29.893595 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:30.393580637 +0000 UTC m=+153.909915936 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.921894 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" podStartSLOduration=126.921880853 podStartE2EDuration="2m6.921880853s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:29.920713033 +0000 UTC m=+153.437048332" watchObservedRunningTime="2026-01-31 03:50:29.921880853 +0000 UTC m=+153.438216142" Jan 31 03:50:29 crc kubenswrapper[4667]: I0131 03:50:29.997728 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:29 crc kubenswrapper[4667]: E0131 03:50:29.998194 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:30.498166217 +0000 UTC m=+154.014501516 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.020093 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nvj6q"] Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.069222 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-sjv9d"] Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.099663 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:30 crc kubenswrapper[4667]: E0131 03:50:30.100065 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:30.600048087 +0000 UTC m=+154.116383386 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.114807 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-q74f5" podStartSLOduration=127.114791221 podStartE2EDuration="2m7.114791221s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:30.113125078 +0000 UTC m=+153.629460377" watchObservedRunningTime="2026-01-31 03:50:30.114791221 +0000 UTC m=+153.631126520" Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.166063 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-31 03:45:29 +0000 UTC, rotation deadline is 2026-12-19 15:47:55.216908662 +0000 UTC Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.166114 4667 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7739h57m25.050796496s for next certificate rotation Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.203192 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:30 crc kubenswrapper[4667]: E0131 03:50:30.203671 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:30.703656202 +0000 UTC m=+154.219991501 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.216145 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pfnkq" podStartSLOduration=128.216130317 podStartE2EDuration="2m8.216130317s" podCreationTimestamp="2026-01-31 03:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:30.166613249 +0000 UTC m=+153.682948548" watchObservedRunningTime="2026-01-31 03:50:30.216130317 +0000 UTC m=+153.732465616" Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.229835 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bclzx" podStartSLOduration=127.229822423 podStartE2EDuration="2m7.229822423s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:30.21779791 +0000 UTC m=+153.734133199" watchObservedRunningTime="2026-01-31 03:50:30.229822423 +0000 UTC m=+153.746157722" Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.230840 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7pbrg"] Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.251887 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp"] Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.306415 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:30 crc kubenswrapper[4667]: E0131 03:50:30.307173 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:30.807158495 +0000 UTC m=+154.323493794 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.356028 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" event={"ID":"e7b15a28-215c-46ab-b4c7-a46f5e8205ae","Type":"ContainerStarted","Data":"859dc27175629caedf16776d54a31a898fef238f0067ff43e1e0b47af150d972"} Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.411035 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:30 crc kubenswrapper[4667]: E0131 03:50:30.411496 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:30.911484988 +0000 UTC m=+154.427820287 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.421256 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nvj6q" event={"ID":"26736509-60d6-4b4e-94b2-1fef29fa0c91","Type":"ContainerStarted","Data":"58916d1a636c81e629f2e19a814099b62c22044e7650e1111afcec1a7bebe6ca"} Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.458158 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-sbbxx" event={"ID":"0400a903-d02e-41b4-99f3-3c7b57744839","Type":"ContainerStarted","Data":"0e143bc3c926f39b91fb5d7830ea8e6a18265af65c4c9ae637195bb53a998795"} Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.505810 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pgpmm" event={"ID":"83ec5b09-78b8-44c8-8530-2375417e0c97","Type":"ContainerStarted","Data":"3a87e81b1329841055512bd550bd6849fe6b7237cd9b2903359ab46b3885b4cb"} Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.505854 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pgpmm" event={"ID":"83ec5b09-78b8-44c8-8530-2375417e0c97","Type":"ContainerStarted","Data":"ad7059f2f3121a656c4ae0232375f47afcd585f3546eb7108173b4f9a2e63741"} Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.514167 4667 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:30 crc kubenswrapper[4667]: E0131 03:50:30.515127 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:31.015100233 +0000 UTC m=+154.531435532 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.535985 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kvgs8" event={"ID":"ed8b36d0-771d-48bb-9393-db864ff8ff84","Type":"ContainerStarted","Data":"4628bdbba034a9d9480f0de36fc1e252b077a4e4a791fa298c5fa4e594050212"} Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.565662 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-stnvq" event={"ID":"dbbace8c-06bb-4b50-a132-a681482dc9e5","Type":"ContainerStarted","Data":"2375c0ec8f1830e11003d6bea051ada440ef6661ac5d3e1add8c12655c1e675e"} Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.596153 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-sjv9d" event={"ID":"364c92fa-bb5a-428d-a999-eb6415d3f307","Type":"ContainerStarted","Data":"63537e6e1add159432e0f509f17c88a6beeb12893fc0ce9c4a5f0f97876e6902"} Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.615581 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:30 crc kubenswrapper[4667]: E0131 03:50:30.615831 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:31.115822003 +0000 UTC m=+154.632157302 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.628508 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" event={"ID":"9af91113-a315-4416-a1f2-6566c16278cf","Type":"ContainerStarted","Data":"5c28aade74b8bab893528fcac276c9658b673a4afd15ea5770f0961052b58e74"} Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.645858 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wjsth" event={"ID":"4f281370-6419-4dfb-b21f-9d1c9c7eddaa","Type":"ContainerStarted","Data":"ec5eda3ca334e4fe609b32378cf754c8b3faf595211461fbc3e6d836a8f1a033"} Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.646940 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nnvtr" event={"ID":"86d8d0d4-69ef-439d-b516-01b8d02cf5ce","Type":"ContainerStarted","Data":"310531a58fcbcd7b5357984aa0cd32daa9dca1f5300e446344b03da1db4500a5"} Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.655997 4667 patch_prober.go:28] interesting pod/downloads-7954f5f757-5zj2q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.655983 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xzsnp" event={"ID":"f18d0f3d-32c1-40d3-99da-969208958cf4","Type":"ContainerStarted","Data":"caabac6b8c4e5c13d1393a5490868c42823e4152688144fd003825cc12a0de34"} Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.656069 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5zj2q" podUID="745b1e30-1f16-4539-847b-88db36eb6d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.669112 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jkdkh" podStartSLOduration=128.669091279 podStartE2EDuration="2m8.669091279s" podCreationTimestamp="2026-01-31 03:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:30.667803925 +0000 UTC m=+154.184139224" watchObservedRunningTime="2026-01-31 03:50:30.669091279 +0000 UTC m=+154.185426578" Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.718424 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:30 crc kubenswrapper[4667]: E0131 03:50:30.720874 4667 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:31.220837855 +0000 UTC m=+154.737173154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.827073 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:30 crc kubenswrapper[4667]: E0131 03:50:30.829469 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:31.32945404 +0000 UTC m=+154.845789329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.938893 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:30 crc kubenswrapper[4667]: E0131 03:50:30.939579 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:31.439566483 +0000 UTC m=+154.955901782 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:30 crc kubenswrapper[4667]: I0131 03:50:30.989495 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-5zj2q" podStartSLOduration=128.989478832 podStartE2EDuration="2m8.989478832s" podCreationTimestamp="2026-01-31 03:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:30.95368416 +0000 UTC m=+154.470019459" watchObservedRunningTime="2026-01-31 03:50:30.989478832 +0000 UTC m=+154.505814131" Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.043806 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:31 crc kubenswrapper[4667]: E0131 03:50:31.048856 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:31.548819705 +0000 UTC m=+155.065155004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.058559 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" podStartSLOduration=128.058541768 podStartE2EDuration="2m8.058541768s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:30.990852257 +0000 UTC m=+154.507187556" watchObservedRunningTime="2026-01-31 03:50:31.058541768 +0000 UTC m=+154.574877067" Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.060367 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" podStartSLOduration=129.060359715 podStartE2EDuration="2m9.060359715s" podCreationTimestamp="2026-01-31 03:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:31.057514291 +0000 UTC m=+154.573849590" watchObservedRunningTime="2026-01-31 03:50:31.060359715 +0000 UTC m=+154.576695014" Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.145523 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:31 crc kubenswrapper[4667]: E0131 03:50:31.145939 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:31.645926221 +0000 UTC m=+155.162261520 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.228568 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-zpjcj" podStartSLOduration=128.22854613 podStartE2EDuration="2m8.22854613s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:31.162162583 +0000 UTC m=+154.678497882" watchObservedRunningTime="2026-01-31 03:50:31.22854613 +0000 UTC m=+154.744881429" Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.247369 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:31 crc kubenswrapper[4667]: E0131 03:50:31.248103 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:31.748074228 +0000 UTC m=+155.264409527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.250220 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kfr9j"] Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.348884 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:31 crc kubenswrapper[4667]: E0131 03:50:31.349305 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:31.8492879 +0000 UTC m=+155.365623199 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.457837 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:31 crc kubenswrapper[4667]: E0131 03:50:31.458183 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:31.958173233 +0000 UTC m=+155.474508532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.558763 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:31 crc kubenswrapper[4667]: E0131 03:50:31.559121 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:32.059106058 +0000 UTC m=+155.575441357 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.576504 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-pgpmm" podStartSLOduration=6.57648395 podStartE2EDuration="6.57648395s" podCreationTimestamp="2026-01-31 03:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:31.540919945 +0000 UTC m=+155.057255244" watchObservedRunningTime="2026-01-31 03:50:31.57648395 +0000 UTC m=+155.092819249" Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.638923 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-sbbxx" podStartSLOduration=128.638908054 podStartE2EDuration="2m8.638908054s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:31.579006006 +0000 UTC m=+155.095341305" watchObservedRunningTime="2026-01-31 03:50:31.638908054 +0000 UTC m=+155.155243343" Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.641287 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vsb97"] Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.641318 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq"] Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.666418 4667 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-dmtcm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.28:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.666893 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" podUID="21469e62-0345-41f0-a07b-eac67df38faf" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.28:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.668151 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:31 crc kubenswrapper[4667]: E0131 03:50:31.668414 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-31 03:50:32.168402821 +0000 UTC m=+155.684738120 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.691067 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr"] Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.764516 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nnvtr" event={"ID":"86d8d0d4-69ef-439d-b516-01b8d02cf5ce","Type":"ContainerStarted","Data":"ce124bd88a58ed0731bfec2cff0c2ff87df4b04a8409f0d179e2e1010048f1ea"} Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.764557 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-nnvtr" Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.770305 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:31 crc kubenswrapper[4667]: E0131 03:50:31.770677 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:32.270662491 +0000 UTC m=+155.786997790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.775344 4667 patch_prober.go:28] interesting pod/console-operator-58897d9998-nnvtr container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.775392 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-nnvtr" podUID="86d8d0d4-69ef-439d-b516-01b8d02cf5ce" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.811166 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kfr9j" event={"ID":"d44cf093-fd97-4adf-bdad-2c3fdb4157d7","Type":"ContainerStarted","Data":"2be44b5f0121363b6f759c1f43f51662406861140de5b09eb47f049f32eed2eb"} Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.829123 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kvgs8" event={"ID":"ed8b36d0-771d-48bb-9393-db864ff8ff84","Type":"ContainerStarted","Data":"52d608a795a756f5335db807bc0e0f854a9de690958f3286b295ed4ddb76510b"} Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.869531 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp" event={"ID":"145e5e24-2f94-48b2-be05-b08dbbb09312","Type":"ContainerStarted","Data":"bd4582241bfb08235ad49f9224238c8abad1554ad5555edf41cc3df1d03882a8"} Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.869581 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp" event={"ID":"145e5e24-2f94-48b2-be05-b08dbbb09312","Type":"ContainerStarted","Data":"88e41582cab244056dd314db637c414b19dfac8944186154d07e400f31d6b0c0"} Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.876557 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:31 crc kubenswrapper[4667]: E0131 03:50:31.878538 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:32.378519506 +0000 UTC m=+155.894854805 (durationBeforeRetry 500ms). 
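[annotation] Each record identifies the volume by its UniqueName, kubernetes.io/csi/<driver>^<volumeHandle>: the segment before the caret is the CSI driver kubelet looks up, and the segment after it is the volume handle, here identical to the PV name. A small sketch of that split; splitUniqueName is a hypothetical helper for illustration, not a kubelet API.

```go
// Sketch: split a kubelet CSI volume UniqueName of the form
// "kubernetes.io/csi/<driver>^<volumeHandle>" as printed in these records.
package main

import (
	"fmt"
	"strings"
)

// splitUniqueName is a hypothetical helper, named here for illustration only.
func splitUniqueName(u string) (driver, handle string, err error) {
	rest, ok := strings.CutPrefix(u, "kubernetes.io/csi/")
	if !ok {
		return "", "", fmt.Errorf("not a CSI unique name: %q", u)
	}
	driver, handle, ok = strings.Cut(rest, "^")
	if !ok {
		return "", "", fmt.Errorf("missing ^ separator: %q", u)
	}
	return driver, handle, nil
}

func main() {
	d, h, err := splitUniqueName("kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8")
	if err != nil {
		panic(err)
	}
	fmt.Println(d) // kubevirt.io.hostpath-provisioner
	fmt.Println(h) // pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8
}
```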
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.879481 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-sjv9d" event={"ID":"364c92fa-bb5a-428d-a999-eb6415d3f307","Type":"ContainerStarted","Data":"24b4d79338d8d3df15b6caa3abe9b11dd2b7edfb02f812bf1795dcb819ac0b72"} Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.889245 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9pdkp"] Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.919351 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp"] Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.941285 4667 generic.go:334] "Generic (PLEG): container finished" podID="9af91113-a315-4416-a1f2-6566c16278cf" containerID="5c28aade74b8bab893528fcac276c9658b673a4afd15ea5770f0961052b58e74" exitCode=0 Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.941385 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" event={"ID":"9af91113-a315-4416-a1f2-6566c16278cf","Type":"ContainerDied","Data":"5c28aade74b8bab893528fcac276c9658b673a4afd15ea5770f0961052b58e74"} Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.941415 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" event={"ID":"9af91113-a315-4416-a1f2-6566c16278cf","Type":"ContainerStarted","Data":"8ba9022405bbc6d413122d8f31cd828a9482f36c76987ba4925c609807c961dc"} Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.942020 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.947020 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-j2rkz"] Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.963137 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nvj6q" event={"ID":"26736509-60d6-4b4e-94b2-1fef29fa0c91","Type":"ContainerStarted","Data":"ba91b20e03299e7bf6d9230d4b9c32d1e2c207a126fdcc6aef6012337ef48b73"} Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.974945 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lcnlf"] Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.979001 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:31 
crc kubenswrapper[4667]: E0131 03:50:31.980774 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:32.480741375 +0000 UTC m=+155.997076674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:31 crc kubenswrapper[4667]: I0131 03:50:31.991704 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" event={"ID":"e7b15a28-215c-46ab-b4c7-a46f5e8205ae","Type":"ContainerStarted","Data":"35b4a46c42abeeed38d4b976dcc2f1c36cf2c3069504754e2928f027abb009f8"} Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.041087 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-kvgs8" podStartSLOduration=129.041066634 podStartE2EDuration="2m9.041066634s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:32.040434478 +0000 UTC m=+155.556769787" watchObservedRunningTime="2026-01-31 03:50:32.041066634 +0000 UTC m=+155.557401923" Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.042965 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xzsnp" event={"ID":"f18d0f3d-32c1-40d3-99da-969208958cf4","Type":"ContainerStarted","Data":"f0aacfad3f654b4da8a2644234834dbf53bac1fc2f3b6e1456840cd85eb67418"} Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.070763 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wjsth" event={"ID":"4f281370-6419-4dfb-b21f-9d1c9c7eddaa","Type":"ContainerStarted","Data":"8e0b5bceb97157464b0e39472d0fb5e7c96918020db6f8b97a4b753317739cd6"} Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.080557 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:32 crc kubenswrapper[4667]: E0131 03:50:32.084021 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:32.584008851 +0000 UTC m=+156.100344150 (durationBeforeRetry 500ms). 
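[annotation] The nestedpendingoperations errors are kubelet's per-volume retry gate: after a failure, the same operation is refused until the printed deadline, i.e. the last error time plus durationBeforeRetry. A minimal model of that gate follows, assuming the usual exponential-backoff shape (500ms initial delay, doubling, with an assumed cap); the real logic lives in kubelet's goroutinemap/exponentialbackoff package, and this stand-in is deliberately simplified.

```go
// Sketch: a simplified model of the "No retries permitted until <t>" gate
// seen in the nestedpendingoperations.go records. Not the actual kubelet
// implementation; initial delay matches the log, growth and cap are assumed.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	lastError time.Time
	delay     time.Duration // printed as durationBeforeRetry
}

func (b *backoff) fail(now time.Time) {
	switch {
	case b.delay == 0:
		b.delay = 500 * time.Millisecond // durationBeforeRetry 500ms, as logged
	case b.delay < 2*time.Minute:
		b.delay *= 2 // exponential growth up to an assumed cap
	}
	b.lastError = now
}

func (b *backoff) allowed(now time.Time) bool {
	return now.After(b.lastError.Add(b.delay))
}

func main() {
	var b backoff
	now := time.Now()
	b.fail(now)
	fmt.Println("no retries permitted until", b.lastError.Add(b.delay).Format(time.RFC3339Nano))
	fmt.Println("allowed immediately?", b.allowed(now)) // false until the deadline passes
}
```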
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.085660 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-stnvq" event={"ID":"dbbace8c-06bb-4b50-a132-a681482dc9e5","Type":"ContainerStarted","Data":"529cd3b272107fa1861bee54e75a1019c8f65c20cd2c5ab4fcdc6fee92c178be"} Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.099952 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" event={"ID":"d426c096-b6d9-4696-8066-2b9ec75356af","Type":"ContainerStarted","Data":"e94d4e8ba1eb248eeb1951f7ff5f115f37d3247fe9c66b1b88da95e50014b0ec"} Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.099985 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" event={"ID":"d426c096-b6d9-4696-8066-2b9ec75356af","Type":"ContainerStarted","Data":"5ef4cf80add0ba8ce141d7ebc9a980bca9437d86d568ba96f6a0cf0d62a4c2b1"} Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.099999 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.102967 4667 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7pbrg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.103259 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" podUID="d426c096-b6d9-4696-8066-2b9ec75356af" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 31 03:50:32 crc kubenswrapper[4667]: W0131 03:50:32.103851 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9356361a_5000_468f_bb1d_17460cd2e9dc.slice/crio-e0accbfce00852e72383c52f80abd7368ff6c7dea449f7803ca5ca6e76db51b3 WatchSource:0}: Error finding container e0accbfce00852e72383c52f80abd7368ff6c7dea449f7803ca5ca6e76db51b3: Status 404 returned error can't find the container with id e0accbfce00852e72383c52f80abd7368ff6c7dea449f7803ca5ca6e76db51b3 Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.137742 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dndtw"] Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.149040 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nvj6q" podStartSLOduration=129.13814969 podStartE2EDuration="2m9.13814969s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:32.126303731 +0000 UTC m=+155.642639030" watchObservedRunningTime="2026-01-31 03:50:32.13814969 +0000 UTC m=+155.654484989" Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.154967 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xhcb5"] Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.158130 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-czqrm"] Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.158172 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mt595"] Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.167908 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp" podStartSLOduration=130.167889333 podStartE2EDuration="2m10.167889333s" podCreationTimestamp="2026-01-31 03:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:32.166537428 +0000 UTC m=+155.682872727" watchObservedRunningTime="2026-01-31 03:50:32.167889333 +0000 UTC m=+155.684224632" Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.182020 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:32 crc kubenswrapper[4667]: E0131 03:50:32.182608 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:32.682582335 +0000 UTC m=+156.198917634 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.194496 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-4qz94" podStartSLOduration=129.194479045 podStartE2EDuration="2m9.194479045s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:32.193732035 +0000 UTC m=+155.710067334" watchObservedRunningTime="2026-01-31 03:50:32.194479045 +0000 UTC m=+155.710814344" Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.216944 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.217819 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.252351 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.267047 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" podStartSLOduration=130.267030082 podStartE2EDuration="2m10.267030082s" podCreationTimestamp="2026-01-31 03:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:32.26311988 +0000 UTC m=+155.779455179" watchObservedRunningTime="2026-01-31 03:50:32.267030082 +0000 UTC m=+155.783365381" Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.270418 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.283483 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:32 crc kubenswrapper[4667]: E0131 03:50:32.283742 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:32.783732286 +0000 UTC m=+156.300067585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.298445 4667 patch_prober.go:28] interesting pod/router-default-5444994796-kvgs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:50:32 crc kubenswrapper[4667]: [-]has-synced failed: reason withheld Jan 31 03:50:32 crc kubenswrapper[4667]: [+]process-running ok Jan 31 03:50:32 crc kubenswrapper[4667]: healthz check failed Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.298499 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kvgs8" podUID="ed8b36d0-771d-48bb-9393-db864ff8ff84" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.321231 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-sjv9d" podStartSLOduration=129.321212151 podStartE2EDuration="2m9.321212151s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:32.320057231 +0000 UTC m=+155.836392530" watchObservedRunningTime="2026-01-31 03:50:32.321212151 +0000 UTC m=+155.837547460" Jan 31 03:50:32 crc kubenswrapper[4667]: W0131 03:50:32.342975 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5236f7ce_22c8_4283_9046_72fd92d2b7a7.slice/crio-b235ae38e0c3e5f6c1e47fce19739bb8ce3cec362324b3dfc53e38c750e323d8 WatchSource:0}: Error finding container b235ae38e0c3e5f6c1e47fce19739bb8ce3cec362324b3dfc53e38c750e323d8: Status 404 returned error can't find the container with id b235ae38e0c3e5f6c1e47fce19739bb8ce3cec362324b3dfc53e38c750e323d8 Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.350615 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xf9cn"] Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.384284 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:32 crc kubenswrapper[4667]: E0131 03:50:32.385521 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:32.885504714 +0000 UTC m=+156.401840013 (durationBeforeRetry 500ms). 
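[annotation] Interleaved with the volume loop, the patch_prober/prober records are plain HTTP GETs against pod IPs: "connection refused" means nothing is listening yet, while the router's 500 from its health endpoint means the server is up but sub-checks ([-]backend-http, [-]has-synced) still fail. A rough sketch of such a check; the probe URL is taken from the marketplace-operator record, and the one-second timeout is an assumption.

```go
// Sketch: approximately what kubelet's HTTP prober does for the readiness and
// startup checks failing in these records: GET the probe URL with a short
// timeout and treat transport errors or any status outside 200-399 as failure.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second} // timeout value assumed
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused" while the container starts
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d, start-of-body: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := probe("http://10.217.0.32:8080/healthz"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```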
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.400915 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s9tvt"] Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.401774 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7txvq"] Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.490515 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:32 crc kubenswrapper[4667]: E0131 03:50:32.491174 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:32.991162592 +0000 UTC m=+156.507497891 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:32 crc kubenswrapper[4667]: W0131 03:50:32.527478 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf59768_adfb_48f6_b68b_ebf1675f1807.slice/crio-2027917985fd22ac934a79f133c32b36220cc70f4d68aa0678114507db8e0126 WatchSource:0}: Error finding container 2027917985fd22ac934a79f133c32b36220cc70f4d68aa0678114507db8e0126: Status 404 returned error can't find the container with id 2027917985fd22ac934a79f133c32b36220cc70f4d68aa0678114507db8e0126 Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.541867 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-nnvtr" podStartSLOduration=130.54183327 podStartE2EDuration="2m10.54183327s" podCreationTimestamp="2026-01-31 03:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:32.478189804 +0000 UTC m=+155.994525103" watchObservedRunningTime="2026-01-31 03:50:32.54183327 +0000 UTC m=+156.058168569" Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.542572 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv"] Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.555328 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["hostpath-provisioner/csi-hostpathplugin-82j72"] Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.592379 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:32 crc kubenswrapper[4667]: E0131 03:50:32.593082 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:33.093064512 +0000 UTC m=+156.609399811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.593204 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-wjsth" podStartSLOduration=130.593183275 podStartE2EDuration="2m10.593183275s" podCreationTimestamp="2026-01-31 03:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:32.591547783 +0000 UTC m=+156.107883072" watchObservedRunningTime="2026-01-31 03:50:32.593183275 +0000 UTC m=+156.109518574" Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.621486 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b"] Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.694897 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:32 crc kubenswrapper[4667]: E0131 03:50:32.695222 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:33.195210669 +0000 UTC m=+156.711545968 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.747410 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-stnvq" podStartSLOduration=129.747392666 podStartE2EDuration="2m9.747392666s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:32.732253623 +0000 UTC m=+156.248588922" watchObservedRunningTime="2026-01-31 03:50:32.747392666 +0000 UTC m=+156.263727965" Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.796313 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:32 crc kubenswrapper[4667]: E0131 03:50:32.796651 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:33.296636587 +0000 UTC m=+156.812971886 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:32 crc kubenswrapper[4667]: I0131 03:50:32.897831 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:32 crc kubenswrapper[4667]: E0131 03:50:32.902493 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:33.4024794 +0000 UTC m=+156.918814699 (durationBeforeRetry 500ms). 
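[annotation] The SyncLoop UPDATE for hostpath-provisioner/csi-hostpathplugin-82j72 a few records earlier is the fix arriving: once that plugin pod comes up and registers its socket with kubelet, the mount and unmount retries above can start succeeding. One way to verify what name a CSI plugin will register is to ask its identity service directly over gRPC; the socket path below is an assumption for this plugin, and the returned name must match kubevirt.io.hostpath-provisioner byte for byte.

```go
// Sketch: query a CSI plugin's identity service for the driver name it will
// register. The unix socket path is an assumption; adjust it to whatever the
// csi-hostpathplugin pod actually exposes under /var/lib/kubelet/plugins*.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CSI plugins speak gRPC over a local unix socket; path assumed here.
	conn, err := grpc.DialContext(ctx, "unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	info, err := csi.NewIdentityClient(conn).GetPluginInfo(ctx, &csi.GetPluginInfoRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("driver name:", info.GetName())
}
```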
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:32.999479 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:33 crc kubenswrapper[4667]: E0131 03:50:32.999717 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:33.499698609 +0000 UTC m=+157.016033908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.000115 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:33 crc kubenswrapper[4667]: E0131 03:50:33.000573 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:33.500563122 +0000 UTC m=+157.016898501 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.100678 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:33 crc kubenswrapper[4667]: E0131 03:50:33.101070 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:33.601057376 +0000 UTC m=+157.117392675 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.155059 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7txvq" event={"ID":"cbf59768-adfb-48f6-b68b-ebf1675f1807","Type":"ContainerStarted","Data":"2027917985fd22ac934a79f133c32b36220cc70f4d68aa0678114507db8e0126"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.170328 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xf9cn" event={"ID":"5608a012-09d1-4c57-9371-715625086e4d","Type":"ContainerStarted","Data":"54fc0dfb5f9657c4778908fca6df0da4407bbc00485b2822d2f9d5ff1f364b89"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.183695 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mt595" event={"ID":"402d584b-6176-4cee-8e27-cc233b48feec","Type":"ContainerStarted","Data":"31e22e144cebc55b12ae34e5ecb476293be69c98293bcb4b0cb4e61f0ba2831a"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.198991 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xzsnp" event={"ID":"f18d0f3d-32c1-40d3-99da-969208958cf4","Type":"ContainerStarted","Data":"c6753e8949b31b66823895722aa68e6f89119ed5bedb8a284b0227c9bd0d9133"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.202773 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:33 crc kubenswrapper[4667]: E0131 03:50:33.204380 4667 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:33.703805688 +0000 UTC m=+157.220140987 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.205828 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kfr9j" event={"ID":"d44cf093-fd97-4adf-bdad-2c3fdb4157d7","Type":"ContainerStarted","Data":"7008aa3e557b4463e3e7412049556e58d3498355ad93bfcba8c95c84141ceebf"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.209313 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" event={"ID":"56542c6f-259e-42a8-b62a-ea0ac38af319","Type":"ContainerStarted","Data":"b7801f4a17fb4e076fcec37b0c4e14e5e24da8bd5c747d9ef3f6ca886e11201d"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.218042 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lcnlf" event={"ID":"edfe2658-7a40-43db-a17b-72d1ea1fde3d","Type":"ContainerStarted","Data":"9436e64c18745dee4de0cc206209a98ee91b32fbfa8a5389773b48a85baf85e4"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.221038 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" podStartSLOduration=130.221020466 podStartE2EDuration="2m10.221020466s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:32.843228579 +0000 UTC m=+156.359563878" watchObservedRunningTime="2026-01-31 03:50:33.221020466 +0000 UTC m=+156.737355765" Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.222117 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-xzsnp" podStartSLOduration=130.222110854 podStartE2EDuration="2m10.222110854s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:33.220772129 +0000 UTC m=+156.737107428" watchObservedRunningTime="2026-01-31 03:50:33.222110854 +0000 UTC m=+156.738446153" Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.226277 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq" event={"ID":"d5757a8b-5d94-46d5-bc18-4a6c757a9ff2","Type":"ContainerStarted","Data":"62ea46ea183ae023df36b6aa76e3401dde8f06f916b8912c3a0f262449a0e3ab"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.226320 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq" 
event={"ID":"d5757a8b-5d94-46d5-bc18-4a6c757a9ff2","Type":"ContainerStarted","Data":"ae6127f71dcfd8179b736c82510019330d35cc4d354b100163634174598d661e"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.229776 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv" event={"ID":"9b119a43-b446-4226-9490-a7ba5baf2815","Type":"ContainerStarted","Data":"04bff131bc731d93de7aab1275d38f0cece8ff37e5ac57af23f890ccbdf8fb85"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.239092 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s9tvt" event={"ID":"0c33fc16-1215-438a-93e6-840ca5444e75","Type":"ContainerStarted","Data":"603c1a0f9b818321915df96f5a1c4c557aaf169b4f63c8f41c39cf5a6254e2fe"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.245077 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vsb97" event={"ID":"9ce6e7c4-30b6-4812-8b50-bf81a13f7b9d","Type":"ContainerStarted","Data":"d9bf452344525810af1288888cdc9492a4acc58d878a396b73f0f09e13b8d7c4"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.245128 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vsb97" event={"ID":"9ce6e7c4-30b6-4812-8b50-bf81a13f7b9d","Type":"ContainerStarted","Data":"88c0b7968e3792c294b10086e927f2fbc16cfe7c2485bd1fcd2c460f31d27465"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.248135 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp" event={"ID":"031121b1-b221-434c-91c0-d9b433cd6e7c","Type":"ContainerStarted","Data":"d22e4f051770ee19702876520b259217450863e33dfff193dfe94128b0885c6c"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.248935 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-82j72" event={"ID":"3dc6c6a9-3322-42db-a408-9a03c18a7531","Type":"ContainerStarted","Data":"bd4330b4a76065abc64d2c3330a2d1baa1068d3997d4946de06d15c5996729e8"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.273064 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-dndtw" event={"ID":"18e6f9b3-4be1-4a96-9a05-f42b40f4c2fe","Type":"ContainerStarted","Data":"8fc552d673a69bc4657fbef9aa81bc3c93c38b14bce4181b4d785b15f08839ec"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.278027 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-czqrm" event={"ID":"5236f7ce-22c8-4283-9046-72fd92d2b7a7","Type":"ContainerStarted","Data":"b235ae38e0c3e5f6c1e47fce19739bb8ce3cec362324b3dfc53e38c750e323d8"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.283596 4667 patch_prober.go:28] interesting pod/router-default-5444994796-kvgs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:50:33 crc kubenswrapper[4667]: [-]has-synced failed: reason withheld Jan 31 03:50:33 crc kubenswrapper[4667]: [+]process-running ok Jan 31 03:50:33 crc kubenswrapper[4667]: healthz check failed Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.283668 4667 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-kvgs8" podUID="ed8b36d0-771d-48bb-9393-db864ff8ff84" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.304368 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:33 crc kubenswrapper[4667]: E0131 03:50:33.305660 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:33.805643937 +0000 UTC m=+157.321979236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.311723 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" event={"ID":"3a49a8a9-82b6-4374-a43a-224f2f9e14a4","Type":"ContainerStarted","Data":"1204bcdec9e9e39a14e0acfe21f2d66e51c4646dbc46ed335f874f3e72b2564b"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.326665 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2rkz" event={"ID":"9356361a-5000-468f-bb1d-17460cd2e9dc","Type":"ContainerStarted","Data":"5a3c1d39e68b7c2936cce5dc2d52858b0a77d0c66189b9790e1fc9954b2c7c20"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.326734 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2rkz" event={"ID":"9356361a-5000-468f-bb1d-17460cd2e9dc","Type":"ContainerStarted","Data":"e0accbfce00852e72383c52f80abd7368ff6c7dea449f7803ca5ca6e76db51b3"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.343316 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr" event={"ID":"bc826929-768c-408f-a0f9-74bd29154340","Type":"ContainerStarted","Data":"d2cc144037f3735a29af8e3754e71dba79bb283d0dd4bfee345e027065ca39fe"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.343359 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr" event={"ID":"bc826929-768c-408f-a0f9-74bd29154340","Type":"ContainerStarted","Data":"30431e8b6d31359ee6b66edabf9e4a06b5d9f2d43b57007c32c6cd8f544e6d3b"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.344632 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr" Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.351498 4667 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nrjrr 
container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" start-of-body= Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.351962 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr" podUID="bc826929-768c-408f-a0f9-74bd29154340" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.357815 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9pdkp" event={"ID":"889f5458-9c03-4be0-8f99-848f68c3ecc8","Type":"ContainerStarted","Data":"92df1e2da50f5371bf6b4ee3124619b7f6a67333fe24c71d88c225595bd3d2f7"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.357983 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9pdkp" event={"ID":"889f5458-9c03-4be0-8f99-848f68c3ecc8","Type":"ContainerStarted","Data":"f8cd771c7fdd29d2f2f7490fae2f518ad1327d40e347eb32ae71a8419576a417"} Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.358625 4667 patch_prober.go:28] interesting pod/console-operator-58897d9998-nnvtr container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.358664 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-nnvtr" podUID="86d8d0d4-69ef-439d-b516-01b8d02cf5ce" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.359686 4667 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7pbrg container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.359704 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" podUID="d426c096-b6d9-4696-8066-2b9ec75356af" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.375543 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-4rfkg" Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.405373 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:33 crc kubenswrapper[4667]: 
E0131 03:50:33.405658 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:33.905646598 +0000 UTC m=+157.421981897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.462710 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr" podStartSLOduration=130.462689652 podStartE2EDuration="2m10.462689652s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:33.397322652 +0000 UTC m=+156.913657951" watchObservedRunningTime="2026-01-31 03:50:33.462689652 +0000 UTC m=+156.979024951" Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.506985 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:33 crc kubenswrapper[4667]: E0131 03:50:33.508476 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:34.008459482 +0000 UTC m=+157.524794781 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.546939 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9pdkp" podStartSLOduration=130.546923093 podStartE2EDuration="2m10.546923093s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:33.542312773 +0000 UTC m=+157.058648072" watchObservedRunningTime="2026-01-31 03:50:33.546923093 +0000 UTC m=+157.063258392" Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.614457 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:33 crc kubenswrapper[4667]: E0131 03:50:33.614738 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:34.114726527 +0000 UTC m=+157.631061826 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.717128 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:33 crc kubenswrapper[4667]: E0131 03:50:33.717719 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:34.217703935 +0000 UTC m=+157.734039234 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.823140 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:33 crc kubenswrapper[4667]: E0131 03:50:33.823197 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:34.323183549 +0000 UTC m=+157.839518848 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:33 crc kubenswrapper[4667]: I0131 03:50:33.924325 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:33 crc kubenswrapper[4667]: E0131 03:50:33.924614 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:34.424599517 +0000 UTC m=+157.940934816 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.025193 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:34 crc kubenswrapper[4667]: E0131 03:50:34.025715 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:34.525685426 +0000 UTC m=+158.042020875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.127104 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:34 crc kubenswrapper[4667]: E0131 03:50:34.127413 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:34.627397752 +0000 UTC m=+158.143733051 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.228802 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:34 crc kubenswrapper[4667]: E0131 03:50:34.229245 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:34.72923446 +0000 UTC m=+158.245569759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.272115 4667 patch_prober.go:28] interesting pod/router-default-5444994796-kvgs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:50:34 crc kubenswrapper[4667]: [-]has-synced failed: reason withheld Jan 31 03:50:34 crc kubenswrapper[4667]: [+]process-running ok Jan 31 03:50:34 crc kubenswrapper[4667]: healthz check failed Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.272168 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kvgs8" podUID="ed8b36d0-771d-48bb-9393-db864ff8ff84" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.329663 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:34 crc kubenswrapper[4667]: E0131 03:50:34.329822 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:34.829792546 +0000 UTC m=+158.346127845 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.329901 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:34 crc kubenswrapper[4667]: E0131 03:50:34.330192 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:34.830184556 +0000 UTC m=+158.346519855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.361655 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s9tvt" event={"ID":"0c33fc16-1215-438a-93e6-840ca5444e75","Type":"ContainerStarted","Data":"debe78148fe8f86173308b42d9b32bc38247856a14dcc183ce9bac127e626045"} Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.362756 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" event={"ID":"3a49a8a9-82b6-4374-a43a-224f2f9e14a4","Type":"ContainerStarted","Data":"21cfb32470179093ddd52420305ef4293012ef1b0930434be7c3d3a7d82182bd"} Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.363830 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2rkz" event={"ID":"9356361a-5000-468f-bb1d-17460cd2e9dc","Type":"ContainerStarted","Data":"d363dcd591107e77466606014ef6ad022995ff849e583ba4650bc98df5ff1837"} Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.365494 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lcnlf" event={"ID":"edfe2658-7a40-43db-a17b-72d1ea1fde3d","Type":"ContainerStarted","Data":"c6e84a37c50fe4cc507a0b8ee22f2248ab8bde7ce249d8ee635b532d2038690c"} Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.366613 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mt595" event={"ID":"402d584b-6176-4cee-8e27-cc233b48feec","Type":"ContainerStarted","Data":"c95d7e2ba1a4a4372c706d4ca2f16046c575e316043cbedc9ffe1e7d0f5e4917"} Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.368905 4667 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-czqrm" event={"ID":"5236f7ce-22c8-4283-9046-72fd92d2b7a7","Type":"ContainerStarted","Data":"50b38bfb71acce8990c0ce84ec614454a4bf2f511fd0eac2ee2208185df77f95"} Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.368928 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-czqrm" event={"ID":"5236f7ce-22c8-4283-9046-72fd92d2b7a7","Type":"ContainerStarted","Data":"f94b25b003f0541488eea69bb8127f90d925c6a6bdb894f3ed20e9803d90f5d6"} Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.370749 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq" event={"ID":"d5757a8b-5d94-46d5-bc18-4a6c757a9ff2","Type":"ContainerStarted","Data":"c23c1eb9827195f48d3d0937ee9601d03192ac8659b0ed6a63ed01b3fb2f101b"} Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.372480 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vsb97" event={"ID":"9ce6e7c4-30b6-4812-8b50-bf81a13f7b9d","Type":"ContainerStarted","Data":"3510a6d2bfb48744d579f022af2700bffc01780a19791c4437529a7e053b9d0b"} Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.372807 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vsb97" Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.374142 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp" event={"ID":"031121b1-b221-434c-91c0-d9b433cd6e7c","Type":"ContainerStarted","Data":"cde9d4436cd910cc0634eb80f82a7e326a01343a146ac0b5c4e477239ebb1a07"} Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.374701 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp" Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.375915 4667 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-97sgp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.375948 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp" podUID="031121b1-b221-434c-91c0-d9b433cd6e7c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.376192 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kfr9j" event={"ID":"d44cf093-fd97-4adf-bdad-2c3fdb4157d7","Type":"ContainerStarted","Data":"5071d0e67b5cc8b47dbb4d78e509bf887d83638d096e07ced681c601327f1bb7"} Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.376603 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-kfr9j" Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.378432 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" 
event={"ID":"56542c6f-259e-42a8-b62a-ea0ac38af319","Type":"ContainerStarted","Data":"a79cbe1c0d5fdcab3ae457bb60a12c3e764b211507bd7aeeff81b63b905de0be"} Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.379062 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.380319 4667 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-67p2b container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" start-of-body= Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.380351 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" podUID="56542c6f-259e-42a8-b62a-ea0ac38af319" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.382764 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv" event={"ID":"9b119a43-b446-4226-9490-a7ba5baf2815","Type":"ContainerStarted","Data":"fe723b9f536f83879017e7f17da6f40461b34c48812805c1b893ebc015f28d2e"} Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.384519 4667 generic.go:334] "Generic (PLEG): container finished" podID="cbf59768-adfb-48f6-b68b-ebf1675f1807" containerID="6c5eb7e3022b734ad3f4b38859245036ab4d1de7adab44cfc8aab8f8dbc134bd" exitCode=0 Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.384566 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7txvq" event={"ID":"cbf59768-adfb-48f6-b68b-ebf1675f1807","Type":"ContainerDied","Data":"6c5eb7e3022b734ad3f4b38859245036ab4d1de7adab44cfc8aab8f8dbc134bd"} Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.386090 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xf9cn" event={"ID":"5608a012-09d1-4c57-9371-715625086e4d","Type":"ContainerStarted","Data":"7fde825eca1fadff7b8f391ffe014b2667061301a4fa69acb2713f2aea041b26"} Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.389882 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-dndtw" event={"ID":"18e6f9b3-4be1-4a96-9a05-f42b40f4c2fe","Type":"ContainerStarted","Data":"d2b2ad105ae4d6892e4667c84c0182cf217485e5d6148d45375e5df34d30ed75"} Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.391353 4667 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nrjrr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" start-of-body= Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.391387 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr" podUID="bc826929-768c-408f-a0f9-74bd29154340" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.392996 4667 patch_prober.go:28] interesting 
pod/openshift-config-operator-7777fb866f-ms8lf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.393024 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" podUID="9af91113-a315-4416-a1f2-6566c16278cf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.435255 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:34 crc kubenswrapper[4667]: E0131 03:50:34.435572 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:34.935559447 +0000 UTC m=+158.451894746 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.470600 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s9tvt" podStartSLOduration=131.470581478 podStartE2EDuration="2m11.470581478s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:34.402341993 +0000 UTC m=+157.918677292" watchObservedRunningTime="2026-01-31 03:50:34.470581478 +0000 UTC m=+157.986916777" Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.497780 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mt595" podStartSLOduration=132.497764584 podStartE2EDuration="2m12.497764584s" podCreationTimestamp="2026-01-31 03:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:34.49720526 +0000 UTC m=+158.013540559" watchObservedRunningTime="2026-01-31 03:50:34.497764584 +0000 UTC m=+158.014099883" Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.498767 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" podStartSLOduration=131.49876157 podStartE2EDuration="2m11.49876157s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:34.471901242 +0000 UTC m=+157.988236541" watchObservedRunningTime="2026-01-31 03:50:34.49876157 +0000 UTC m=+158.015096869" Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.540431 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp" podStartSLOduration=131.540415684 podStartE2EDuration="2m11.540415684s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:34.532947889 +0000 UTC m=+158.049283188" watchObservedRunningTime="2026-01-31 03:50:34.540415684 +0000 UTC m=+158.056750983" Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.544376 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:34 crc kubenswrapper[4667]: E0131 03:50:34.544620 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:35.044611123 +0000 UTC m=+158.560946422 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.567283 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rs9cq" podStartSLOduration=131.567265832 podStartE2EDuration="2m11.567265832s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:34.566732928 +0000 UTC m=+158.083068227" watchObservedRunningTime="2026-01-31 03:50:34.567265832 +0000 UTC m=+158.083601131" Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.606074 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-czqrm" podStartSLOduration=131.606057541 podStartE2EDuration="2m11.606057541s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:34.603956246 +0000 UTC m=+158.120291545" watchObservedRunningTime="2026-01-31 03:50:34.606057541 +0000 UTC m=+158.122392840" Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.645261 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:34 crc kubenswrapper[4667]: E0131 03:50:34.645595 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:35.145581409 +0000 UTC m=+158.661916708 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.659351 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-dndtw" podStartSLOduration=131.659326166 podStartE2EDuration="2m11.659326166s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:34.658581807 +0000 UTC m=+158.174917106" watchObservedRunningTime="2026-01-31 03:50:34.659326166 +0000 UTC m=+158.175661465" Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.659701 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lcnlf" podStartSLOduration=131.659696216 podStartE2EDuration="2m11.659696216s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:34.635991899 +0000 UTC m=+158.152327198" watchObservedRunningTime="2026-01-31 03:50:34.659696216 +0000 UTC m=+158.176031515" Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.728919 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzjpv" podStartSLOduration=131.728904496 podStartE2EDuration="2m11.728904496s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:34.720861167 +0000 UTC m=+158.237196466" watchObservedRunningTime="2026-01-31 03:50:34.728904496 +0000 UTC m=+158.245239795" Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.746304 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:34 crc kubenswrapper[4667]: E0131 03:50:34.746595 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 03:50:35.246584856 +0000 UTC m=+158.762920155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.784620 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-j2rkz" podStartSLOduration=131.784603075 podStartE2EDuration="2m11.784603075s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:34.781971076 +0000 UTC m=+158.298306385" watchObservedRunningTime="2026-01-31 03:50:34.784603075 +0000 UTC m=+158.300938374" Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.824657 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vsb97" podStartSLOduration=131.824640846 podStartE2EDuration="2m11.824640846s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:34.821969547 +0000 UTC m=+158.338304846" watchObservedRunningTime="2026-01-31 03:50:34.824640846 +0000 UTC m=+158.340976145" Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.847045 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:34 crc kubenswrapper[4667]: E0131 03:50:34.847368 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:35.347350487 +0000 UTC m=+158.863685786 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.957371 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:34 crc kubenswrapper[4667]: E0131 03:50:34.957639 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:35.457628715 +0000 UTC m=+158.973964014 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:34 crc kubenswrapper[4667]: I0131 03:50:34.959299 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-xhcb5" podStartSLOduration=132.959289059 podStartE2EDuration="2m12.959289059s" podCreationTimestamp="2026-01-31 03:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:34.932000109 +0000 UTC m=+158.448335408" watchObservedRunningTime="2026-01-31 03:50:34.959289059 +0000 UTC m=+158.475624358" Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.011773 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-xf9cn" podStartSLOduration=10.011759683 podStartE2EDuration="10.011759683s" podCreationTimestamp="2026-01-31 03:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:35.009728111 +0000 UTC m=+158.526063410" watchObservedRunningTime="2026-01-31 03:50:35.011759683 +0000 UTC m=+158.528094982" Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.067106 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:35 crc kubenswrapper[4667]: E0131 03:50:35.067705 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:35.567687848 +0000 UTC m=+159.084023147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.085001 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-kfr9j" podStartSLOduration=10.08465749 podStartE2EDuration="10.08465749s" podCreationTimestamp="2026-01-31 03:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:35.048257563 +0000 UTC m=+158.564592852" watchObservedRunningTime="2026-01-31 03:50:35.08465749 +0000 UTC m=+158.600992789" Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.174617 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:35 crc kubenswrapper[4667]: E0131 03:50:35.174986 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:35.674973719 +0000 UTC m=+159.191309018 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.268500 4667 patch_prober.go:28] interesting pod/router-default-5444994796-kvgs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:50:35 crc kubenswrapper[4667]: [-]has-synced failed: reason withheld Jan 31 03:50:35 crc kubenswrapper[4667]: [+]process-running ok Jan 31 03:50:35 crc kubenswrapper[4667]: healthz check failed Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.268558 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kvgs8" podUID="ed8b36d0-771d-48bb-9393-db864ff8ff84" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.275722 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:35 crc kubenswrapper[4667]: E0131 03:50:35.276078 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:35.776064218 +0000 UTC m=+159.292399507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.377823 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:35 crc kubenswrapper[4667]: E0131 03:50:35.378283 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:35.878267516 +0000 UTC m=+159.394602815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.399696 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7txvq" event={"ID":"cbf59768-adfb-48f6-b68b-ebf1675f1807","Type":"ContainerStarted","Data":"a6debbfb4520a6c51c72ec7beb514478082f81c84361563232991a7b66a5fe18"} Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.399745 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7txvq" event={"ID":"cbf59768-adfb-48f6-b68b-ebf1675f1807","Type":"ContainerStarted","Data":"a50dbdc995b846e0375b91acd3e03d53b67c7d7e152c4defba16a14a36fb2d86"} Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.401940 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-82j72" event={"ID":"3dc6c6a9-3322-42db-a408-9a03c18a7531","Type":"ContainerStarted","Data":"90ab4852e6c1d8e0798a15a2e1083cb3ff973cf1c203acb92a35f591232179ac"} Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.405007 4667 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-97sgp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.405043 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp" podUID="031121b1-b221-434c-91c0-d9b433cd6e7c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.405621 4667 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-67p2b container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" start-of-body= Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.405664 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" podUID="56542c6f-259e-42a8-b62a-ea0ac38af319" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.422647 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nrjrr" Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.474786 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-7txvq" podStartSLOduration=133.474772527 podStartE2EDuration="2m13.474772527s" podCreationTimestamp="2026-01-31 03:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-31 03:50:35.444036257 +0000 UTC m=+158.960371556" watchObservedRunningTime="2026-01-31 03:50:35.474772527 +0000 UTC m=+158.991107826" Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.479363 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:35 crc kubenswrapper[4667]: E0131 03:50:35.479514 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:35.97949718 +0000 UTC m=+159.495832469 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.479569 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:35 crc kubenswrapper[4667]: E0131 03:50:35.479965 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:35.979923421 +0000 UTC m=+159.496258720 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.580611 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:35 crc kubenswrapper[4667]: E0131 03:50:35.582247 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:36.082232532 +0000 UTC m=+159.598567831 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.616194 4667 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ms8lf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.616246 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" podUID="9af91113-a315-4416-a1f2-6566c16278cf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.682343 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:35 crc kubenswrapper[4667]: E0131 03:50:35.682729 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:36.182717435 +0000 UTC m=+159.699052724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.783378 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:35 crc kubenswrapper[4667]: E0131 03:50:35.783639 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:36.28362498 +0000 UTC m=+159.799960279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.884872 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:35 crc kubenswrapper[4667]: E0131 03:50:35.885143 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:36.38513173 +0000 UTC m=+159.901467029 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:35 crc kubenswrapper[4667]: I0131 03:50:35.986104 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:35 crc kubenswrapper[4667]: E0131 03:50:35.986404 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:36.486389694 +0000 UTC m=+160.002724993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.087238 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:36 crc kubenswrapper[4667]: E0131 03:50:36.087557 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:36.587546745 +0000 UTC m=+160.103882044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.188634 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:36 crc kubenswrapper[4667]: E0131 03:50:36.188744 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:36.688728857 +0000 UTC m=+160.205064156 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.188939 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:36 crc kubenswrapper[4667]: E0131 03:50:36.189203 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:36.689195359 +0000 UTC m=+160.205530658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.269631 4667 patch_prober.go:28] interesting pod/router-default-5444994796-kvgs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:50:36 crc kubenswrapper[4667]: [-]has-synced failed: reason withheld Jan 31 03:50:36 crc kubenswrapper[4667]: [+]process-running ok Jan 31 03:50:36 crc kubenswrapper[4667]: healthz check failed Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.269681 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kvgs8" podUID="ed8b36d0-771d-48bb-9393-db864ff8ff84" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.290158 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:36 crc kubenswrapper[4667]: E0131 03:50:36.290333 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:36.790310799 +0000 UTC m=+160.306646098 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.290412 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:36 crc kubenswrapper[4667]: E0131 03:50:36.290666 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:36.790654298 +0000 UTC m=+160.306989597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.391920 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:36 crc kubenswrapper[4667]: E0131 03:50:36.392233 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:36.89221892 +0000 UTC m=+160.408554219 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.415894 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-82j72" event={"ID":"3dc6c6a9-3322-42db-a408-9a03c18a7531","Type":"ContainerStarted","Data":"7f1eae0726ed5ffc9a38f40140f4782f5789c946d0f7917a56492a431ebeb9ca"} Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.434421 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-97sgp" Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.493223 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:36 crc kubenswrapper[4667]: E0131 03:50:36.493549 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:36.993537605 +0000 UTC m=+160.509872904 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:36 crc kubenswrapper[4667]: E0131 03:50:36.594756 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:37.094736888 +0000 UTC m=+160.611072187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.594789 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.595242 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:36 crc kubenswrapper[4667]: E0131 03:50:36.599625 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:37.099615595 +0000 UTC m=+160.615950894 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.696366 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:36 crc kubenswrapper[4667]: E0131 03:50:36.696469 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:37.196453783 +0000 UTC m=+160.712789082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.696756 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:36 crc kubenswrapper[4667]: E0131 03:50:36.697031 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:37.197025138 +0000 UTC m=+160.713360437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.798186 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:36 crc kubenswrapper[4667]: E0131 03:50:36.798511 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:37.298498768 +0000 UTC m=+160.814834067 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.899795 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:36 crc kubenswrapper[4667]: E0131 03:50:36.900107 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:37.40009621 +0000 UTC m=+160.916431509 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.905035 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4lz7l"] Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.905970 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4lz7l" Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.908947 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 03:50:36 crc kubenswrapper[4667]: I0131 03:50:36.935746 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4lz7l"] Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.001460 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.001691 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6-catalog-content\") pod \"community-operators-4lz7l\" (UID: \"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6\") " pod="openshift-marketplace/community-operators-4lz7l" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.001765 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcx9j\" (UniqueName: \"kubernetes.io/projected/7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6-kube-api-access-dcx9j\") pod \"community-operators-4lz7l\" (UID: \"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6\") " pod="openshift-marketplace/community-operators-4lz7l" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.001795 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6-utilities\") pod \"community-operators-4lz7l\" (UID: \"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6\") " pod="openshift-marketplace/community-operators-4lz7l" Jan 31 03:50:37 crc kubenswrapper[4667]: E0131 03:50:37.001982 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:37.50195018 +0000 UTC m=+161.018285479 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.089177 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lh6xn"] Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.090136 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lh6xn" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.094456 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.103407 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lh6xn"] Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.105181 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcx9j\" (UniqueName: \"kubernetes.io/projected/7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6-kube-api-access-dcx9j\") pod \"community-operators-4lz7l\" (UID: \"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6\") " pod="openshift-marketplace/community-operators-4lz7l" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.105239 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.105264 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6-utilities\") pod \"community-operators-4lz7l\" (UID: \"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6\") " pod="openshift-marketplace/community-operators-4lz7l" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.105406 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6-catalog-content\") pod \"community-operators-4lz7l\" (UID: \"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6\") " pod="openshift-marketplace/community-operators-4lz7l" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.105918 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6-catalog-content\") pod \"community-operators-4lz7l\" (UID: \"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6\") " pod="openshift-marketplace/community-operators-4lz7l" Jan 31 03:50:37 crc kubenswrapper[4667]: E0131 03:50:37.105914 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:37.605809861 +0000 UTC m=+161.122145340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.105933 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6-utilities\") pod \"community-operators-4lz7l\" (UID: \"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6\") " pod="openshift-marketplace/community-operators-4lz7l" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.132903 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcx9j\" (UniqueName: \"kubernetes.io/projected/7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6-kube-api-access-dcx9j\") pod \"community-operators-4lz7l\" (UID: \"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6\") " pod="openshift-marketplace/community-operators-4lz7l" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.206336 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:37 crc kubenswrapper[4667]: E0131 03:50:37.206606 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:37.706562092 +0000 UTC m=+161.222897391 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.206685 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fc82b44-ef8d-4f7c-a022-fcbed68b1fab-catalog-content\") pod \"certified-operators-lh6xn\" (UID: \"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab\") " pod="openshift-marketplace/certified-operators-lh6xn" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.206794 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzjr5\" (UniqueName: \"kubernetes.io/projected/6fc82b44-ef8d-4f7c-a022-fcbed68b1fab-kube-api-access-xzjr5\") pod \"certified-operators-lh6xn\" (UID: \"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab\") " pod="openshift-marketplace/certified-operators-lh6xn" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.207084 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fc82b44-ef8d-4f7c-a022-fcbed68b1fab-utilities\") pod \"certified-operators-lh6xn\" (UID: \"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab\") " pod="openshift-marketplace/certified-operators-lh6xn" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.229475 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4lz7l" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.260686 4667 patch_prober.go:28] interesting pod/downloads-7954f5f757-5zj2q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.260736 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5zj2q" podUID="745b1e30-1f16-4539-847b-88db36eb6d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.264409 4667 patch_prober.go:28] interesting pod/downloads-7954f5f757-5zj2q container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.264492 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5zj2q" podUID="745b1e30-1f16-4539-847b-88db36eb6d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.267937 4667 patch_prober.go:28] interesting pod/router-default-5444994796-kvgs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:50:37 crc kubenswrapper[4667]: [-]has-synced failed: reason withheld Jan 31 03:50:37 crc kubenswrapper[4667]: [+]process-running ok Jan 31 03:50:37 crc kubenswrapper[4667]: healthz check failed Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.268014 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kvgs8" podUID="ed8b36d0-771d-48bb-9393-db864ff8ff84" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.308326 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.308392 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fc82b44-ef8d-4f7c-a022-fcbed68b1fab-catalog-content\") pod \"certified-operators-lh6xn\" (UID: \"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab\") " pod="openshift-marketplace/certified-operators-lh6xn" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.308421 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzjr5\" (UniqueName: \"kubernetes.io/projected/6fc82b44-ef8d-4f7c-a022-fcbed68b1fab-kube-api-access-xzjr5\") pod \"certified-operators-lh6xn\" (UID: \"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab\") " pod="openshift-marketplace/certified-operators-lh6xn" 
Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.308482 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fc82b44-ef8d-4f7c-a022-fcbed68b1fab-utilities\") pod \"certified-operators-lh6xn\" (UID: \"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab\") " pod="openshift-marketplace/certified-operators-lh6xn" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.308969 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fc82b44-ef8d-4f7c-a022-fcbed68b1fab-utilities\") pod \"certified-operators-lh6xn\" (UID: \"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab\") " pod="openshift-marketplace/certified-operators-lh6xn" Jan 31 03:50:37 crc kubenswrapper[4667]: E0131 03:50:37.309337 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:37.809320404 +0000 UTC m=+161.325655703 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.309606 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fc82b44-ef8d-4f7c-a022-fcbed68b1fab-catalog-content\") pod \"certified-operators-lh6xn\" (UID: \"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab\") " pod="openshift-marketplace/certified-operators-lh6xn" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.309946 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-btrmv"] Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.311196 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-btrmv" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.322099 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-btrmv"] Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.384873 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzjr5\" (UniqueName: \"kubernetes.io/projected/6fc82b44-ef8d-4f7c-a022-fcbed68b1fab-kube-api-access-xzjr5\") pod \"certified-operators-lh6xn\" (UID: \"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab\") " pod="openshift-marketplace/certified-operators-lh6xn" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.409409 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.409580 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2720ee62-278e-4d32-924f-aa7401d1e7cb-utilities\") pod \"community-operators-btrmv\" (UID: \"2720ee62-278e-4d32-924f-aa7401d1e7cb\") " pod="openshift-marketplace/community-operators-btrmv" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.409668 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvkr9\" (UniqueName: \"kubernetes.io/projected/2720ee62-278e-4d32-924f-aa7401d1e7cb-kube-api-access-vvkr9\") pod \"community-operators-btrmv\" (UID: \"2720ee62-278e-4d32-924f-aa7401d1e7cb\") " pod="openshift-marketplace/community-operators-btrmv" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.409715 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2720ee62-278e-4d32-924f-aa7401d1e7cb-catalog-content\") pod \"community-operators-btrmv\" (UID: \"2720ee62-278e-4d32-924f-aa7401d1e7cb\") " pod="openshift-marketplace/community-operators-btrmv" Jan 31 03:50:37 crc kubenswrapper[4667]: E0131 03:50:37.409821 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:37.909806738 +0000 UTC m=+161.426142037 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.410198 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lh6xn" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.416975 4667 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-67p2b container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.417020 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" podUID="56542c6f-259e-42a8-b62a-ea0ac38af319" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.20:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.468258 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-82j72" event={"ID":"3dc6c6a9-3322-42db-a408-9a03c18a7531","Type":"ContainerStarted","Data":"54f30e36248280d1eaa809a09ed0fd75fddefa6cd5e945c8db652ad57a925e17"} Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.468337 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-82j72" event={"ID":"3dc6c6a9-3322-42db-a408-9a03c18a7531","Type":"ContainerStarted","Data":"0572ae2bb8b24fa2a4c3c9f5392dc1428cdd45c350350b8c7fb0af3031456632"} Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.498258 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-52dpw"] Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.498297 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-82j72" podStartSLOduration=12.49828006 podStartE2EDuration="12.49828006s" podCreationTimestamp="2026-01-31 03:50:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:37.494171763 +0000 UTC m=+161.010507062" watchObservedRunningTime="2026-01-31 03:50:37.49828006 +0000 UTC m=+161.014615359" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.499533 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-52dpw" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.511799 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvkr9\" (UniqueName: \"kubernetes.io/projected/2720ee62-278e-4d32-924f-aa7401d1e7cb-kube-api-access-vvkr9\") pod \"community-operators-btrmv\" (UID: \"2720ee62-278e-4d32-924f-aa7401d1e7cb\") " pod="openshift-marketplace/community-operators-btrmv" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.511862 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.511884 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2720ee62-278e-4d32-924f-aa7401d1e7cb-catalog-content\") pod \"community-operators-btrmv\" (UID: \"2720ee62-278e-4d32-924f-aa7401d1e7cb\") " pod="openshift-marketplace/community-operators-btrmv" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.511938 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2720ee62-278e-4d32-924f-aa7401d1e7cb-utilities\") pod \"community-operators-btrmv\" (UID: \"2720ee62-278e-4d32-924f-aa7401d1e7cb\") " pod="openshift-marketplace/community-operators-btrmv" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.512515 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2720ee62-278e-4d32-924f-aa7401d1e7cb-utilities\") pod \"community-operators-btrmv\" (UID: \"2720ee62-278e-4d32-924f-aa7401d1e7cb\") " pod="openshift-marketplace/community-operators-btrmv" Jan 31 03:50:37 crc kubenswrapper[4667]: E0131 03:50:37.512697 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:38.012682744 +0000 UTC m=+161.529018043 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.512821 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2720ee62-278e-4d32-924f-aa7401d1e7cb-catalog-content\") pod \"community-operators-btrmv\" (UID: \"2720ee62-278e-4d32-924f-aa7401d1e7cb\") " pod="openshift-marketplace/community-operators-btrmv" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.517326 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-52dpw"] Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.563481 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvkr9\" (UniqueName: \"kubernetes.io/projected/2720ee62-278e-4d32-924f-aa7401d1e7cb-kube-api-access-vvkr9\") pod \"community-operators-btrmv\" (UID: \"2720ee62-278e-4d32-924f-aa7401d1e7cb\") " pod="openshift-marketplace/community-operators-btrmv" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.613858 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.614621 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdvrd\" (UniqueName: \"kubernetes.io/projected/8ac842ab-5970-4e4c-81ee-2f8b40d61416-kube-api-access-zdvrd\") pod \"certified-operators-52dpw\" (UID: \"8ac842ab-5970-4e4c-81ee-2f8b40d61416\") " pod="openshift-marketplace/certified-operators-52dpw" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.614683 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac842ab-5970-4e4c-81ee-2f8b40d61416-utilities\") pod \"certified-operators-52dpw\" (UID: \"8ac842ab-5970-4e4c-81ee-2f8b40d61416\") " pod="openshift-marketplace/certified-operators-52dpw" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.614751 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac842ab-5970-4e4c-81ee-2f8b40d61416-catalog-content\") pod \"certified-operators-52dpw\" (UID: \"8ac842ab-5970-4e4c-81ee-2f8b40d61416\") " pod="openshift-marketplace/certified-operators-52dpw" Jan 31 03:50:37 crc kubenswrapper[4667]: E0131 03:50:37.615597 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 03:50:38.1155726 +0000 UTC m=+161.631907899 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.638165 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-btrmv" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.638630 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.665442 4667 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.694588 4667 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-31T03:50:37.665482299Z","Handler":null,"Name":""} Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.719567 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdvrd\" (UniqueName: \"kubernetes.io/projected/8ac842ab-5970-4e4c-81ee-2f8b40d61416-kube-api-access-zdvrd\") pod \"certified-operators-52dpw\" (UID: \"8ac842ab-5970-4e4c-81ee-2f8b40d61416\") " pod="openshift-marketplace/certified-operators-52dpw" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.719612 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac842ab-5970-4e4c-81ee-2f8b40d61416-utilities\") pod \"certified-operators-52dpw\" (UID: \"8ac842ab-5970-4e4c-81ee-2f8b40d61416\") " pod="openshift-marketplace/certified-operators-52dpw" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.719672 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac842ab-5970-4e4c-81ee-2f8b40d61416-catalog-content\") pod \"certified-operators-52dpw\" (UID: \"8ac842ab-5970-4e4c-81ee-2f8b40d61416\") " pod="openshift-marketplace/certified-operators-52dpw" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.719726 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:37 crc kubenswrapper[4667]: E0131 03:50:37.720011 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 03:50:38.219999957 +0000 UTC m=+161.736335256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w7g4m" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.720412 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac842ab-5970-4e4c-81ee-2f8b40d61416-catalog-content\") pod \"certified-operators-52dpw\" (UID: \"8ac842ab-5970-4e4c-81ee-2f8b40d61416\") " pod="openshift-marketplace/certified-operators-52dpw" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.721057 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac842ab-5970-4e4c-81ee-2f8b40d61416-utilities\") pod \"certified-operators-52dpw\" (UID: \"8ac842ab-5970-4e4c-81ee-2f8b40d61416\") " pod="openshift-marketplace/certified-operators-52dpw" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.765230 4667 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.765275 4667 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.776263 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdvrd\" (UniqueName: \"kubernetes.io/projected/8ac842ab-5970-4e4c-81ee-2f8b40d61416-kube-api-access-zdvrd\") pod \"certified-operators-52dpw\" (UID: \"8ac842ab-5970-4e4c-81ee-2f8b40d61416\") " pod="openshift-marketplace/certified-operators-52dpw" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.818566 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-52dpw" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.820493 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.946370 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.953422 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4lz7l"] Jan 31 03:50:37 crc kubenswrapper[4667]: I0131 03:50:37.964960 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.028074 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.059287 4667 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.059346 4667 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.230533 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w7g4m\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.270631 4667 patch_prober.go:28] interesting pod/router-default-5444994796-kvgs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:50:38 crc kubenswrapper[4667]: [-]has-synced failed: reason withheld Jan 31 03:50:38 crc kubenswrapper[4667]: [+]process-running ok Jan 31 03:50:38 crc kubenswrapper[4667]: healthz check failed Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.270700 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kvgs8" podUID="ed8b36d0-771d-48bb-9393-db864ff8ff84" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.458671 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lh6xn"] Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.479337 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.490625 4667 generic.go:334] "Generic (PLEG): container finished" podID="7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6" containerID="12f75f520a303c4544c3e3e96588ce998de47980ec0e0ead1048d5f554b3eb08" exitCode=0 Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.490807 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4lz7l" event={"ID":"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6","Type":"ContainerDied","Data":"12f75f520a303c4544c3e3e96588ce998de47980ec0e0ead1048d5f554b3eb08"} Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.490875 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4lz7l" event={"ID":"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6","Type":"ContainerStarted","Data":"ac7ce3b43a6a117a4342e3655788aa6b7375e558555f753c127fd65f56f468fd"} Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.500510 4667 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.534008 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-btrmv"] Jan 31 03:50:38 crc kubenswrapper[4667]: W0131 03:50:38.625971 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2720ee62_278e_4d32_924f_aa7401d1e7cb.slice/crio-60126b8ef646d6f2e3cbe9f6a98adfc0005b4ee202d439378b492421511115b3 WatchSource:0}: Error finding container 60126b8ef646d6f2e3cbe9f6a98adfc0005b4ee202d439378b492421511115b3: Status 404 returned error can't find the container with id 60126b8ef646d6f2e3cbe9f6a98adfc0005b4ee202d439378b492421511115b3 Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.698299 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-52dpw"] Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.846714 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.850389 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.856187 4667 patch_prober.go:28] interesting pod/console-f9d7485db-wjsth container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.856237 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wjsth" podUID="4f281370-6419-4dfb-b21f-9d1c9c7eddaa" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.869325 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-nnvtr" Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.924709 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:50:38 crc kubenswrapper[4667]: I0131 03:50:38.979635 4667 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w7g4m"] Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.007983 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.058867 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.062700 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.069627 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.070131 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.116427 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.123191 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gzpml"] Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.124613 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gzpml" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.128608 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gzpml"] Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.129366 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.181444 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d010741-ba9c-43b5-9dc3-87cb17d353d2-catalog-content\") pod \"redhat-marketplace-gzpml\" (UID: \"6d010741-ba9c-43b5-9dc3-87cb17d353d2\") " pod="openshift-marketplace/redhat-marketplace-gzpml" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.181493 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03260891-c96f-4124-9a42-f42f6ddcf474-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"03260891-c96f-4124-9a42-f42f6ddcf474\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.181548 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d010741-ba9c-43b5-9dc3-87cb17d353d2-utilities\") pod \"redhat-marketplace-gzpml\" (UID: \"6d010741-ba9c-43b5-9dc3-87cb17d353d2\") " pod="openshift-marketplace/redhat-marketplace-gzpml" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.181595 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03260891-c96f-4124-9a42-f42f6ddcf474-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"03260891-c96f-4124-9a42-f42f6ddcf474\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.181611 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x69x\" (UniqueName: \"kubernetes.io/projected/6d010741-ba9c-43b5-9dc3-87cb17d353d2-kube-api-access-8x69x\") pod \"redhat-marketplace-gzpml\" (UID: \"6d010741-ba9c-43b5-9dc3-87cb17d353d2\") " pod="openshift-marketplace/redhat-marketplace-gzpml" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.267012 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.269387 4667 patch_prober.go:28] interesting pod/router-default-5444994796-kvgs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:50:39 crc kubenswrapper[4667]: [-]has-synced failed: reason withheld Jan 31 03:50:39 crc kubenswrapper[4667]: [+]process-running ok Jan 31 03:50:39 crc kubenswrapper[4667]: healthz check failed Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.269426 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kvgs8" podUID="ed8b36d0-771d-48bb-9393-db864ff8ff84" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.283150 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03260891-c96f-4124-9a42-f42f6ddcf474-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"03260891-c96f-4124-9a42-f42f6ddcf474\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.283181 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x69x\" (UniqueName: \"kubernetes.io/projected/6d010741-ba9c-43b5-9dc3-87cb17d353d2-kube-api-access-8x69x\") pod \"redhat-marketplace-gzpml\" (UID: \"6d010741-ba9c-43b5-9dc3-87cb17d353d2\") " pod="openshift-marketplace/redhat-marketplace-gzpml" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.283206 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d010741-ba9c-43b5-9dc3-87cb17d353d2-catalog-content\") pod \"redhat-marketplace-gzpml\" (UID: \"6d010741-ba9c-43b5-9dc3-87cb17d353d2\") " pod="openshift-marketplace/redhat-marketplace-gzpml" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.283229 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03260891-c96f-4124-9a42-f42f6ddcf474-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"03260891-c96f-4124-9a42-f42f6ddcf474\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.283274 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d010741-ba9c-43b5-9dc3-87cb17d353d2-utilities\") pod \"redhat-marketplace-gzpml\" (UID: \"6d010741-ba9c-43b5-9dc3-87cb17d353d2\") " pod="openshift-marketplace/redhat-marketplace-gzpml" Jan 31 03:50:39 crc 
kubenswrapper[4667]: I0131 03:50:39.283808 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03260891-c96f-4124-9a42-f42f6ddcf474-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"03260891-c96f-4124-9a42-f42f6ddcf474\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.284003 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d010741-ba9c-43b5-9dc3-87cb17d353d2-utilities\") pod \"redhat-marketplace-gzpml\" (UID: \"6d010741-ba9c-43b5-9dc3-87cb17d353d2\") " pod="openshift-marketplace/redhat-marketplace-gzpml" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.284101 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d010741-ba9c-43b5-9dc3-87cb17d353d2-catalog-content\") pod \"redhat-marketplace-gzpml\" (UID: \"6d010741-ba9c-43b5-9dc3-87cb17d353d2\") " pod="openshift-marketplace/redhat-marketplace-gzpml" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.296381 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.297113 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-67p2b" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.308612 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x69x\" (UniqueName: \"kubernetes.io/projected/6d010741-ba9c-43b5-9dc3-87cb17d353d2-kube-api-access-8x69x\") pod \"redhat-marketplace-gzpml\" (UID: \"6d010741-ba9c-43b5-9dc3-87cb17d353d2\") " pod="openshift-marketplace/redhat-marketplace-gzpml" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.318325 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03260891-c96f-4124-9a42-f42f6ddcf474-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"03260891-c96f-4124-9a42-f42f6ddcf474\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.389817 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.457562 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gzpml" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.507382 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qwhn5"] Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.519166 4667 generic.go:334] "Generic (PLEG): container finished" podID="6fc82b44-ef8d-4f7c-a022-fcbed68b1fab" containerID="e270008bbd46a4fc749202dcbb26e58d0db6ef8adff777b0af34a1308a3d3ce5" exitCode=0 Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.521915 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52dpw" event={"ID":"8ac842ab-5970-4e4c-81ee-2f8b40d61416","Type":"ContainerStarted","Data":"22619f829f6de8217eace1f319c72679a525d463d87ca30e1117001eb4fa47e8"} Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.521949 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lh6xn" event={"ID":"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab","Type":"ContainerDied","Data":"e270008bbd46a4fc749202dcbb26e58d0db6ef8adff777b0af34a1308a3d3ce5"} Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.521962 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lh6xn" event={"ID":"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab","Type":"ContainerStarted","Data":"28e06d197924f8c794373cadcb36738993da210245c2ee08312c95c40287617b"} Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.522040 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwhn5" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.523894 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwhn5"] Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.530041 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btrmv" event={"ID":"2720ee62-278e-4d32-924f-aa7401d1e7cb","Type":"ContainerStarted","Data":"60126b8ef646d6f2e3cbe9f6a98adfc0005b4ee202d439378b492421511115b3"} Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.543384 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" event={"ID":"5c9ccc10-6c02-463f-b2fd-a89fcacdb598","Type":"ContainerStarted","Data":"09e53d7f2ec4bafd8e57f3301ae6ca977607bbffbcb114ae89ecce4aeb868034"} Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.591047 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2afc75c6-1f04-4acb-b958-4159c2764e5e-utilities\") pod \"redhat-marketplace-qwhn5\" (UID: \"2afc75c6-1f04-4acb-b958-4159c2764e5e\") " pod="openshift-marketplace/redhat-marketplace-qwhn5" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.591134 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2afc75c6-1f04-4acb-b958-4159c2764e5e-catalog-content\") pod \"redhat-marketplace-qwhn5\" (UID: \"2afc75c6-1f04-4acb-b958-4159c2764e5e\") " pod="openshift-marketplace/redhat-marketplace-qwhn5" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.591166 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk2j7\" (UniqueName: 
\"kubernetes.io/projected/2afc75c6-1f04-4acb-b958-4159c2764e5e-kube-api-access-nk2j7\") pod \"redhat-marketplace-qwhn5\" (UID: \"2afc75c6-1f04-4acb-b958-4159c2764e5e\") " pod="openshift-marketplace/redhat-marketplace-qwhn5" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.692432 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2afc75c6-1f04-4acb-b958-4159c2764e5e-utilities\") pod \"redhat-marketplace-qwhn5\" (UID: \"2afc75c6-1f04-4acb-b958-4159c2764e5e\") " pod="openshift-marketplace/redhat-marketplace-qwhn5" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.692500 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2afc75c6-1f04-4acb-b958-4159c2764e5e-catalog-content\") pod \"redhat-marketplace-qwhn5\" (UID: \"2afc75c6-1f04-4acb-b958-4159c2764e5e\") " pod="openshift-marketplace/redhat-marketplace-qwhn5" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.692522 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk2j7\" (UniqueName: \"kubernetes.io/projected/2afc75c6-1f04-4acb-b958-4159c2764e5e-kube-api-access-nk2j7\") pod \"redhat-marketplace-qwhn5\" (UID: \"2afc75c6-1f04-4acb-b958-4159c2764e5e\") " pod="openshift-marketplace/redhat-marketplace-qwhn5" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.693157 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2afc75c6-1f04-4acb-b958-4159c2764e5e-utilities\") pod \"redhat-marketplace-qwhn5\" (UID: \"2afc75c6-1f04-4acb-b958-4159c2764e5e\") " pod="openshift-marketplace/redhat-marketplace-qwhn5" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.693357 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2afc75c6-1f04-4acb-b958-4159c2764e5e-catalog-content\") pod \"redhat-marketplace-qwhn5\" (UID: \"2afc75c6-1f04-4acb-b958-4159c2764e5e\") " pod="openshift-marketplace/redhat-marketplace-qwhn5" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.739485 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk2j7\" (UniqueName: \"kubernetes.io/projected/2afc75c6-1f04-4acb-b958-4159c2764e5e-kube-api-access-nk2j7\") pod \"redhat-marketplace-qwhn5\" (UID: \"2afc75c6-1f04-4acb-b958-4159c2764e5e\") " pod="openshift-marketplace/redhat-marketplace-qwhn5" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.759662 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.760939 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.777278 4667 patch_prober.go:28] interesting pod/apiserver-76f77b778f-7txvq container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 31 03:50:39 crc kubenswrapper[4667]: [+]log ok Jan 31 03:50:39 crc kubenswrapper[4667]: [+]etcd ok Jan 31 03:50:39 crc kubenswrapper[4667]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 31 03:50:39 crc kubenswrapper[4667]: [+]poststarthook/generic-apiserver-start-informers ok Jan 31 03:50:39 
crc kubenswrapper[4667]: [+]poststarthook/max-in-flight-filter ok Jan 31 03:50:39 crc kubenswrapper[4667]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 31 03:50:39 crc kubenswrapper[4667]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 31 03:50:39 crc kubenswrapper[4667]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 31 03:50:39 crc kubenswrapper[4667]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 31 03:50:39 crc kubenswrapper[4667]: [+]poststarthook/project.openshift.io-projectcache ok Jan 31 03:50:39 crc kubenswrapper[4667]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 31 03:50:39 crc kubenswrapper[4667]: [+]poststarthook/openshift.io-startinformers ok Jan 31 03:50:39 crc kubenswrapper[4667]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 31 03:50:39 crc kubenswrapper[4667]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 31 03:50:39 crc kubenswrapper[4667]: livez check failed Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.777371 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-7txvq" podUID="cbf59768-adfb-48f6-b68b-ebf1675f1807" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.836170 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gzpml"] Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.860135 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwhn5" Jan 31 03:50:39 crc kubenswrapper[4667]: I0131 03:50:39.935044 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.108997 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fz7n8"] Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.116959 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fz7n8" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.124262 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.133411 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fz7n8"] Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.223637 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmrmj\" (UniqueName: \"kubernetes.io/projected/b6b3a151-c2e9-4461-92c3-b7752926f08c-kube-api-access-jmrmj\") pod \"redhat-operators-fz7n8\" (UID: \"b6b3a151-c2e9-4461-92c3-b7752926f08c\") " pod="openshift-marketplace/redhat-operators-fz7n8" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.223716 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b3a151-c2e9-4461-92c3-b7752926f08c-catalog-content\") pod \"redhat-operators-fz7n8\" (UID: \"b6b3a151-c2e9-4461-92c3-b7752926f08c\") " pod="openshift-marketplace/redhat-operators-fz7n8" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.223749 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b3a151-c2e9-4461-92c3-b7752926f08c-utilities\") pod \"redhat-operators-fz7n8\" (UID: \"b6b3a151-c2e9-4461-92c3-b7752926f08c\") " pod="openshift-marketplace/redhat-operators-fz7n8" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.289001 4667 patch_prober.go:28] interesting pod/router-default-5444994796-kvgs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:50:40 crc kubenswrapper[4667]: [-]has-synced failed: reason withheld Jan 31 03:50:40 crc kubenswrapper[4667]: [+]process-running ok Jan 31 03:50:40 crc kubenswrapper[4667]: healthz check failed Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.289063 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kvgs8" podUID="ed8b36d0-771d-48bb-9393-db864ff8ff84" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.328796 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b3a151-c2e9-4461-92c3-b7752926f08c-catalog-content\") pod \"redhat-operators-fz7n8\" (UID: \"b6b3a151-c2e9-4461-92c3-b7752926f08c\") " pod="openshift-marketplace/redhat-operators-fz7n8" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.328867 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b3a151-c2e9-4461-92c3-b7752926f08c-utilities\") pod \"redhat-operators-fz7n8\" (UID: \"b6b3a151-c2e9-4461-92c3-b7752926f08c\") " pod="openshift-marketplace/redhat-operators-fz7n8" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.328945 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmrmj\" (UniqueName: \"kubernetes.io/projected/b6b3a151-c2e9-4461-92c3-b7752926f08c-kube-api-access-jmrmj\") pod 
\"redhat-operators-fz7n8\" (UID: \"b6b3a151-c2e9-4461-92c3-b7752926f08c\") " pod="openshift-marketplace/redhat-operators-fz7n8" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.330101 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b3a151-c2e9-4461-92c3-b7752926f08c-catalog-content\") pod \"redhat-operators-fz7n8\" (UID: \"b6b3a151-c2e9-4461-92c3-b7752926f08c\") " pod="openshift-marketplace/redhat-operators-fz7n8" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.333589 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b3a151-c2e9-4461-92c3-b7752926f08c-utilities\") pod \"redhat-operators-fz7n8\" (UID: \"b6b3a151-c2e9-4461-92c3-b7752926f08c\") " pod="openshift-marketplace/redhat-operators-fz7n8" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.376704 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmrmj\" (UniqueName: \"kubernetes.io/projected/b6b3a151-c2e9-4461-92c3-b7752926f08c-kube-api-access-jmrmj\") pod \"redhat-operators-fz7n8\" (UID: \"b6b3a151-c2e9-4461-92c3-b7752926f08c\") " pod="openshift-marketplace/redhat-operators-fz7n8" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.421969 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwhn5"] Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.477296 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fz7n8" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.493183 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5pmpf"] Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.494067 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5pmpf" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.524665 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5pmpf"] Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.537867 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd00c210-5665-4575-a5a3-413a89f5c03a-catalog-content\") pod \"redhat-operators-5pmpf\" (UID: \"fd00c210-5665-4575-a5a3-413a89f5c03a\") " pod="openshift-marketplace/redhat-operators-5pmpf" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.537930 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd00c210-5665-4575-a5a3-413a89f5c03a-utilities\") pod \"redhat-operators-5pmpf\" (UID: \"fd00c210-5665-4575-a5a3-413a89f5c03a\") " pod="openshift-marketplace/redhat-operators-5pmpf" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.563588 4667 generic.go:334] "Generic (PLEG): container finished" podID="8ac842ab-5970-4e4c-81ee-2f8b40d61416" containerID="835895e1b1da40ff7ed1ffa80d61bc766b4099e15542571008b3a1b336b30b94" exitCode=0 Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.563725 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52dpw" event={"ID":"8ac842ab-5970-4e4c-81ee-2f8b40d61416","Type":"ContainerDied","Data":"835895e1b1da40ff7ed1ffa80d61bc766b4099e15542571008b3a1b336b30b94"} Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.574974 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gzpml" event={"ID":"6d010741-ba9c-43b5-9dc3-87cb17d353d2","Type":"ContainerStarted","Data":"dec63d8b1db1a1692512f25ff1e018cd9d79140cc703c764f386765238354a75"} Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.581013 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwhn5" event={"ID":"2afc75c6-1f04-4acb-b958-4159c2764e5e","Type":"ContainerStarted","Data":"d5763716cac981a726cbf98d3391701b2c4c30ce50323806b5e7ce2fdf2ce096"} Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.594986 4667 generic.go:334] "Generic (PLEG): container finished" podID="2720ee62-278e-4d32-924f-aa7401d1e7cb" containerID="cb84d7ab1c369dfe53a153134d9e0fcc84637b52dbd1c111bca29911b2b15d86" exitCode=0 Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.595045 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btrmv" event={"ID":"2720ee62-278e-4d32-924f-aa7401d1e7cb","Type":"ContainerDied","Data":"cb84d7ab1c369dfe53a153134d9e0fcc84637b52dbd1c111bca29911b2b15d86"} Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.601416 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"03260891-c96f-4124-9a42-f42f6ddcf474","Type":"ContainerStarted","Data":"fec147b47ccc8441138ed75d0200daabe2e5572cb585e6b9664a64925bbae4c7"} Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.605896 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" event={"ID":"5c9ccc10-6c02-463f-b2fd-a89fcacdb598","Type":"ContainerStarted","Data":"7bf7bf59f185b8e33e003d891a86c857db863f0d9832f6349e366d20b72087c7"} Jan 31 03:50:40 crc 
kubenswrapper[4667]: I0131 03:50:40.638790 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99562\" (UniqueName: \"kubernetes.io/projected/fd00c210-5665-4575-a5a3-413a89f5c03a-kube-api-access-99562\") pod \"redhat-operators-5pmpf\" (UID: \"fd00c210-5665-4575-a5a3-413a89f5c03a\") " pod="openshift-marketplace/redhat-operators-5pmpf" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.638897 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd00c210-5665-4575-a5a3-413a89f5c03a-catalog-content\") pod \"redhat-operators-5pmpf\" (UID: \"fd00c210-5665-4575-a5a3-413a89f5c03a\") " pod="openshift-marketplace/redhat-operators-5pmpf" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.638919 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd00c210-5665-4575-a5a3-413a89f5c03a-utilities\") pod \"redhat-operators-5pmpf\" (UID: \"fd00c210-5665-4575-a5a3-413a89f5c03a\") " pod="openshift-marketplace/redhat-operators-5pmpf" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.639936 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd00c210-5665-4575-a5a3-413a89f5c03a-catalog-content\") pod \"redhat-operators-5pmpf\" (UID: \"fd00c210-5665-4575-a5a3-413a89f5c03a\") " pod="openshift-marketplace/redhat-operators-5pmpf" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.640191 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd00c210-5665-4575-a5a3-413a89f5c03a-utilities\") pod \"redhat-operators-5pmpf\" (UID: \"fd00c210-5665-4575-a5a3-413a89f5c03a\") " pod="openshift-marketplace/redhat-operators-5pmpf" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.739869 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99562\" (UniqueName: \"kubernetes.io/projected/fd00c210-5665-4575-a5a3-413a89f5c03a-kube-api-access-99562\") pod \"redhat-operators-5pmpf\" (UID: \"fd00c210-5665-4575-a5a3-413a89f5c03a\") " pod="openshift-marketplace/redhat-operators-5pmpf" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.762581 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99562\" (UniqueName: \"kubernetes.io/projected/fd00c210-5665-4575-a5a3-413a89f5c03a-kube-api-access-99562\") pod \"redhat-operators-5pmpf\" (UID: \"fd00c210-5665-4575-a5a3-413a89f5c03a\") " pod="openshift-marketplace/redhat-operators-5pmpf" Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.823407 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fz7n8"] Jan 31 03:50:40 crc kubenswrapper[4667]: I0131 03:50:40.832851 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5pmpf" Jan 31 03:50:40 crc kubenswrapper[4667]: W0131 03:50:40.836558 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6b3a151_c2e9_4461_92c3_b7752926f08c.slice/crio-87c555801fefda758a455f84fb9013f9697af801af3b077d2f58d9144a57740c WatchSource:0}: Error finding container 87c555801fefda758a455f84fb9013f9697af801af3b077d2f58d9144a57740c: Status 404 returned error can't find the container with id 87c555801fefda758a455f84fb9013f9697af801af3b077d2f58d9144a57740c Jan 31 03:50:41 crc kubenswrapper[4667]: I0131 03:50:41.118060 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5pmpf"] Jan 31 03:50:41 crc kubenswrapper[4667]: I0131 03:50:41.268853 4667 patch_prober.go:28] interesting pod/router-default-5444994796-kvgs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:50:41 crc kubenswrapper[4667]: [-]has-synced failed: reason withheld Jan 31 03:50:41 crc kubenswrapper[4667]: [+]process-running ok Jan 31 03:50:41 crc kubenswrapper[4667]: healthz check failed Jan 31 03:50:41 crc kubenswrapper[4667]: I0131 03:50:41.268913 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kvgs8" podUID="ed8b36d0-771d-48bb-9393-db864ff8ff84" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:50:41 crc kubenswrapper[4667]: I0131 03:50:41.614992 4667 generic.go:334] "Generic (PLEG): container finished" podID="fd00c210-5665-4575-a5a3-413a89f5c03a" containerID="c7da08564baa6bef2b68e9f2a7eb8db95e62f86c2bcad37cf89d40606e3162b9" exitCode=0 Jan 31 03:50:41 crc kubenswrapper[4667]: I0131 03:50:41.615055 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pmpf" event={"ID":"fd00c210-5665-4575-a5a3-413a89f5c03a","Type":"ContainerDied","Data":"c7da08564baa6bef2b68e9f2a7eb8db95e62f86c2bcad37cf89d40606e3162b9"} Jan 31 03:50:41 crc kubenswrapper[4667]: I0131 03:50:41.615080 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pmpf" event={"ID":"fd00c210-5665-4575-a5a3-413a89f5c03a","Type":"ContainerStarted","Data":"2b2e9b443ad3fd9d3d4916a4f1395ac31bab9d9eb0b2bc974943fdcbdf07f55e"} Jan 31 03:50:41 crc kubenswrapper[4667]: I0131 03:50:41.616626 4667 generic.go:334] "Generic (PLEG): container finished" podID="145e5e24-2f94-48b2-be05-b08dbbb09312" containerID="bd4582241bfb08235ad49f9224238c8abad1554ad5555edf41cc3df1d03882a8" exitCode=0 Jan 31 03:50:41 crc kubenswrapper[4667]: I0131 03:50:41.616677 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp" event={"ID":"145e5e24-2f94-48b2-be05-b08dbbb09312","Type":"ContainerDied","Data":"bd4582241bfb08235ad49f9224238c8abad1554ad5555edf41cc3df1d03882a8"} Jan 31 03:50:41 crc kubenswrapper[4667]: I0131 03:50:41.618990 4667 generic.go:334] "Generic (PLEG): container finished" podID="6d010741-ba9c-43b5-9dc3-87cb17d353d2" containerID="df2a65378bc1f97e5ceba68199aaeaeb15a854bac449b7f6823a0b56d1cc2a0f" exitCode=0 Jan 31 03:50:41 crc kubenswrapper[4667]: I0131 03:50:41.619031 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gzpml" 
event={"ID":"6d010741-ba9c-43b5-9dc3-87cb17d353d2","Type":"ContainerDied","Data":"df2a65378bc1f97e5ceba68199aaeaeb15a854bac449b7f6823a0b56d1cc2a0f"} Jan 31 03:50:41 crc kubenswrapper[4667]: I0131 03:50:41.624811 4667 generic.go:334] "Generic (PLEG): container finished" podID="2afc75c6-1f04-4acb-b958-4159c2764e5e" containerID="eb472256f9efbdf663c1aed2cf448b3ef9a2e3b2d1c0cefebb56ee7289ed6713" exitCode=0 Jan 31 03:50:41 crc kubenswrapper[4667]: I0131 03:50:41.624882 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwhn5" event={"ID":"2afc75c6-1f04-4acb-b958-4159c2764e5e","Type":"ContainerDied","Data":"eb472256f9efbdf663c1aed2cf448b3ef9a2e3b2d1c0cefebb56ee7289ed6713"} Jan 31 03:50:41 crc kubenswrapper[4667]: I0131 03:50:41.627479 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz7n8" event={"ID":"b6b3a151-c2e9-4461-92c3-b7752926f08c","Type":"ContainerStarted","Data":"89556686eaa50729596db2202c0955c41b5e9aeff83750d55288f27d240e2957"} Jan 31 03:50:41 crc kubenswrapper[4667]: I0131 03:50:41.627513 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz7n8" event={"ID":"b6b3a151-c2e9-4461-92c3-b7752926f08c","Type":"ContainerStarted","Data":"87c555801fefda758a455f84fb9013f9697af801af3b077d2f58d9144a57740c"} Jan 31 03:50:41 crc kubenswrapper[4667]: I0131 03:50:41.635988 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"03260891-c96f-4124-9a42-f42f6ddcf474","Type":"ContainerStarted","Data":"68636516e026c12f8720f78bc9f7e5b8421999b7627bc78137845497363848ae"} Jan 31 03:50:41 crc kubenswrapper[4667]: I0131 03:50:41.702302 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" podStartSLOduration=138.702281087 podStartE2EDuration="2m18.702281087s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:41.702134443 +0000 UTC m=+165.218469752" watchObservedRunningTime="2026-01-31 03:50:41.702281087 +0000 UTC m=+165.218616396" Jan 31 03:50:41 crc kubenswrapper[4667]: I0131 03:50:41.724263 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.724246158 podStartE2EDuration="2.724246158s" podCreationTimestamp="2026-01-31 03:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:41.720964352 +0000 UTC m=+165.237299661" watchObservedRunningTime="2026-01-31 03:50:41.724246158 +0000 UTC m=+165.240581457" Jan 31 03:50:42 crc kubenswrapper[4667]: I0131 03:50:42.267300 4667 patch_prober.go:28] interesting pod/router-default-5444994796-kvgs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:50:42 crc kubenswrapper[4667]: [-]has-synced failed: reason withheld Jan 31 03:50:42 crc kubenswrapper[4667]: [+]process-running ok Jan 31 03:50:42 crc kubenswrapper[4667]: healthz check failed Jan 31 03:50:42 crc kubenswrapper[4667]: I0131 03:50:42.267685 4667 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-kvgs8" podUID="ed8b36d0-771d-48bb-9393-db864ff8ff84" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:50:42 crc kubenswrapper[4667]: I0131 03:50:42.642717 4667 generic.go:334] "Generic (PLEG): container finished" podID="b6b3a151-c2e9-4461-92c3-b7752926f08c" containerID="89556686eaa50729596db2202c0955c41b5e9aeff83750d55288f27d240e2957" exitCode=0 Jan 31 03:50:42 crc kubenswrapper[4667]: I0131 03:50:42.642789 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz7n8" event={"ID":"b6b3a151-c2e9-4461-92c3-b7752926f08c","Type":"ContainerDied","Data":"89556686eaa50729596db2202c0955c41b5e9aeff83750d55288f27d240e2957"} Jan 31 03:50:42 crc kubenswrapper[4667]: I0131 03:50:42.657519 4667 generic.go:334] "Generic (PLEG): container finished" podID="03260891-c96f-4124-9a42-f42f6ddcf474" containerID="68636516e026c12f8720f78bc9f7e5b8421999b7627bc78137845497363848ae" exitCode=0 Jan 31 03:50:42 crc kubenswrapper[4667]: I0131 03:50:42.658470 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"03260891-c96f-4124-9a42-f42f6ddcf474","Type":"ContainerDied","Data":"68636516e026c12f8720f78bc9f7e5b8421999b7627bc78137845497363848ae"} Jan 31 03:50:43 crc kubenswrapper[4667]: I0131 03:50:43.140942 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp" Jan 31 03:50:43 crc kubenswrapper[4667]: I0131 03:50:43.268077 4667 patch_prober.go:28] interesting pod/router-default-5444994796-kvgs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:50:43 crc kubenswrapper[4667]: [-]has-synced failed: reason withheld Jan 31 03:50:43 crc kubenswrapper[4667]: [+]process-running ok Jan 31 03:50:43 crc kubenswrapper[4667]: healthz check failed Jan 31 03:50:43 crc kubenswrapper[4667]: I0131 03:50:43.268153 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kvgs8" podUID="ed8b36d0-771d-48bb-9393-db864ff8ff84" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:50:43 crc kubenswrapper[4667]: I0131 03:50:43.289372 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/145e5e24-2f94-48b2-be05-b08dbbb09312-secret-volume\") pod \"145e5e24-2f94-48b2-be05-b08dbbb09312\" (UID: \"145e5e24-2f94-48b2-be05-b08dbbb09312\") " Jan 31 03:50:43 crc kubenswrapper[4667]: I0131 03:50:43.289453 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/145e5e24-2f94-48b2-be05-b08dbbb09312-config-volume\") pod \"145e5e24-2f94-48b2-be05-b08dbbb09312\" (UID: \"145e5e24-2f94-48b2-be05-b08dbbb09312\") " Jan 31 03:50:43 crc kubenswrapper[4667]: I0131 03:50:43.289493 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nnqh\" (UniqueName: \"kubernetes.io/projected/145e5e24-2f94-48b2-be05-b08dbbb09312-kube-api-access-9nnqh\") pod \"145e5e24-2f94-48b2-be05-b08dbbb09312\" (UID: \"145e5e24-2f94-48b2-be05-b08dbbb09312\") " Jan 31 03:50:43 crc kubenswrapper[4667]: I0131 03:50:43.291012 4667 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/145e5e24-2f94-48b2-be05-b08dbbb09312-config-volume" (OuterVolumeSpecName: "config-volume") pod "145e5e24-2f94-48b2-be05-b08dbbb09312" (UID: "145e5e24-2f94-48b2-be05-b08dbbb09312"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:50:43 crc kubenswrapper[4667]: I0131 03:50:43.323468 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/145e5e24-2f94-48b2-be05-b08dbbb09312-kube-api-access-9nnqh" (OuterVolumeSpecName: "kube-api-access-9nnqh") pod "145e5e24-2f94-48b2-be05-b08dbbb09312" (UID: "145e5e24-2f94-48b2-be05-b08dbbb09312"). InnerVolumeSpecName "kube-api-access-9nnqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:50:43 crc kubenswrapper[4667]: I0131 03:50:43.332409 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/145e5e24-2f94-48b2-be05-b08dbbb09312-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "145e5e24-2f94-48b2-be05-b08dbbb09312" (UID: "145e5e24-2f94-48b2-be05-b08dbbb09312"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:50:43 crc kubenswrapper[4667]: I0131 03:50:43.390687 4667 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/145e5e24-2f94-48b2-be05-b08dbbb09312-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:43 crc kubenswrapper[4667]: I0131 03:50:43.390713 4667 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/145e5e24-2f94-48b2-be05-b08dbbb09312-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:43 crc kubenswrapper[4667]: I0131 03:50:43.390722 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nnqh\" (UniqueName: \"kubernetes.io/projected/145e5e24-2f94-48b2-be05-b08dbbb09312-kube-api-access-9nnqh\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:43 crc kubenswrapper[4667]: I0131 03:50:43.689355 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp" event={"ID":"145e5e24-2f94-48b2-be05-b08dbbb09312","Type":"ContainerDied","Data":"88e41582cab244056dd314db637c414b19dfac8944186154d07e400f31d6b0c0"} Jan 31 03:50:43 crc kubenswrapper[4667]: I0131 03:50:43.689500 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88e41582cab244056dd314db637c414b19dfac8944186154d07e400f31d6b0c0" Jan 31 03:50:43 crc kubenswrapper[4667]: I0131 03:50:43.689413 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp" Jan 31 03:50:44 crc kubenswrapper[4667]: I0131 03:50:44.114605 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-kfr9j" Jan 31 03:50:44 crc kubenswrapper[4667]: I0131 03:50:44.270498 4667 patch_prober.go:28] interesting pod/router-default-5444994796-kvgs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:50:44 crc kubenswrapper[4667]: [-]has-synced failed: reason withheld Jan 31 03:50:44 crc kubenswrapper[4667]: [+]process-running ok Jan 31 03:50:44 crc kubenswrapper[4667]: healthz check failed Jan 31 03:50:44 crc kubenswrapper[4667]: I0131 03:50:44.270570 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kvgs8" podUID="ed8b36d0-771d-48bb-9393-db864ff8ff84" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:50:44 crc kubenswrapper[4667]: I0131 03:50:44.308798 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 03:50:44 crc kubenswrapper[4667]: I0131 03:50:44.333771 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03260891-c96f-4124-9a42-f42f6ddcf474-kubelet-dir\") pod \"03260891-c96f-4124-9a42-f42f6ddcf474\" (UID: \"03260891-c96f-4124-9a42-f42f6ddcf474\") " Jan 31 03:50:44 crc kubenswrapper[4667]: I0131 03:50:44.333971 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03260891-c96f-4124-9a42-f42f6ddcf474-kube-api-access\") pod \"03260891-c96f-4124-9a42-f42f6ddcf474\" (UID: \"03260891-c96f-4124-9a42-f42f6ddcf474\") " Jan 31 03:50:44 crc kubenswrapper[4667]: I0131 03:50:44.334992 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03260891-c96f-4124-9a42-f42f6ddcf474-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "03260891-c96f-4124-9a42-f42f6ddcf474" (UID: "03260891-c96f-4124-9a42-f42f6ddcf474"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:50:44 crc kubenswrapper[4667]: I0131 03:50:44.362249 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03260891-c96f-4124-9a42-f42f6ddcf474-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "03260891-c96f-4124-9a42-f42f6ddcf474" (UID: "03260891-c96f-4124-9a42-f42f6ddcf474"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:50:44 crc kubenswrapper[4667]: I0131 03:50:44.435162 4667 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03260891-c96f-4124-9a42-f42f6ddcf474-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:44 crc kubenswrapper[4667]: I0131 03:50:44.435209 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03260891-c96f-4124-9a42-f42f6ddcf474-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:44 crc kubenswrapper[4667]: I0131 03:50:44.767160 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:44 crc kubenswrapper[4667]: I0131 03:50:44.775421 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-7txvq" Jan 31 03:50:44 crc kubenswrapper[4667]: I0131 03:50:44.826557 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"03260891-c96f-4124-9a42-f42f6ddcf474","Type":"ContainerDied","Data":"fec147b47ccc8441138ed75d0200daabe2e5572cb585e6b9664a64925bbae4c7"} Jan 31 03:50:44 crc kubenswrapper[4667]: I0131 03:50:44.826615 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fec147b47ccc8441138ed75d0200daabe2e5572cb585e6b9664a64925bbae4c7" Jan 31 03:50:44 crc kubenswrapper[4667]: I0131 03:50:44.826709 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.118914 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 03:50:45 crc kubenswrapper[4667]: E0131 03:50:45.119482 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="145e5e24-2f94-48b2-be05-b08dbbb09312" containerName="collect-profiles" Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.119503 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="145e5e24-2f94-48b2-be05-b08dbbb09312" containerName="collect-profiles" Jan 31 03:50:45 crc kubenswrapper[4667]: E0131 03:50:45.119540 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03260891-c96f-4124-9a42-f42f6ddcf474" containerName="pruner" Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.119548 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="03260891-c96f-4124-9a42-f42f6ddcf474" containerName="pruner" Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.119753 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="145e5e24-2f94-48b2-be05-b08dbbb09312" containerName="collect-profiles" Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.119776 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="03260891-c96f-4124-9a42-f42f6ddcf474" containerName="pruner" Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.120476 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.146702 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.147054 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.162809 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.260588 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc9c9a49-4902-42ce-8adf-9d24cad698e7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cc9c9a49-4902-42ce-8adf-9d24cad698e7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.260665 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc9c9a49-4902-42ce-8adf-9d24cad698e7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cc9c9a49-4902-42ce-8adf-9d24cad698e7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.277991 4667 patch_prober.go:28] interesting pod/router-default-5444994796-kvgs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:50:45 crc kubenswrapper[4667]: [-]has-synced failed: reason withheld Jan 31 03:50:45 crc kubenswrapper[4667]: [+]process-running ok Jan 31 03:50:45 crc kubenswrapper[4667]: healthz check failed Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.278102 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kvgs8" podUID="ed8b36d0-771d-48bb-9393-db864ff8ff84" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.362824 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc9c9a49-4902-42ce-8adf-9d24cad698e7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cc9c9a49-4902-42ce-8adf-9d24cad698e7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.363130 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc9c9a49-4902-42ce-8adf-9d24cad698e7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cc9c9a49-4902-42ce-8adf-9d24cad698e7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.364262 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc9c9a49-4902-42ce-8adf-9d24cad698e7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cc9c9a49-4902-42ce-8adf-9d24cad698e7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.387996 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc9c9a49-4902-42ce-8adf-9d24cad698e7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cc9c9a49-4902-42ce-8adf-9d24cad698e7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.474751 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.668199 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs\") pod \"network-metrics-daemon-n5jv7\" (UID: \"4a24385e-62ca-4a82-8995-9f20115931c4\") " pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.673429 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a24385e-62ca-4a82-8995-9f20115931c4-metrics-certs\") pod \"network-metrics-daemon-n5jv7\" (UID: \"4a24385e-62ca-4a82-8995-9f20115931c4\") " pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.709479 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.709528 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 03:50:45 crc kubenswrapper[4667]: I0131 03:50:45.934367 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5jv7" Jan 31 03:50:46 crc kubenswrapper[4667]: I0131 03:50:46.279048 4667 patch_prober.go:28] interesting pod/router-default-5444994796-kvgs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:50:46 crc kubenswrapper[4667]: [-]has-synced failed: reason withheld Jan 31 03:50:46 crc kubenswrapper[4667]: [+]process-running ok Jan 31 03:50:46 crc kubenswrapper[4667]: healthz check failed Jan 31 03:50:46 crc kubenswrapper[4667]: I0131 03:50:46.279432 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kvgs8" podUID="ed8b36d0-771d-48bb-9393-db864ff8ff84" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:50:46 crc kubenswrapper[4667]: I0131 03:50:46.501038 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 03:50:46 crc kubenswrapper[4667]: I0131 03:50:46.607898 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n5jv7"] Jan 31 03:50:46 crc kubenswrapper[4667]: W0131 03:50:46.712895 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a24385e_62ca_4a82_8995_9f20115931c4.slice/crio-351746a81acd143358baf30c4eb3409b0ae90a22d78b9d7af2870ed082755dfb WatchSource:0}: Error finding container 351746a81acd143358baf30c4eb3409b0ae90a22d78b9d7af2870ed082755dfb: Status 404 returned error can't find the container with id 351746a81acd143358baf30c4eb3409b0ae90a22d78b9d7af2870ed082755dfb Jan 31 03:50:46 crc kubenswrapper[4667]: I0131 03:50:46.882095 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cc9c9a49-4902-42ce-8adf-9d24cad698e7","Type":"ContainerStarted","Data":"7b147c2c68ab1a66955b5f82f1fca3fd934a738caabb957a974c9d8476e58c75"} Jan 31 03:50:46 crc kubenswrapper[4667]: I0131 03:50:46.882966 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n5jv7" event={"ID":"4a24385e-62ca-4a82-8995-9f20115931c4","Type":"ContainerStarted","Data":"351746a81acd143358baf30c4eb3409b0ae90a22d78b9d7af2870ed082755dfb"} Jan 31 03:50:47 crc kubenswrapper[4667]: I0131 03:50:47.260830 4667 patch_prober.go:28] interesting pod/downloads-7954f5f757-5zj2q container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Jan 31 03:50:47 crc kubenswrapper[4667]: I0131 03:50:47.261177 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5zj2q" podUID="745b1e30-1f16-4539-847b-88db36eb6d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Jan 31 03:50:47 crc kubenswrapper[4667]: I0131 03:50:47.264103 4667 patch_prober.go:28] interesting pod/downloads-7954f5f757-5zj2q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" start-of-body= Jan 31 03:50:47 crc kubenswrapper[4667]: I0131 03:50:47.264140 4667 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5zj2q" podUID="745b1e30-1f16-4539-847b-88db36eb6d4b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.5:8080/\": dial tcp 10.217.0.5:8080: connect: connection refused" Jan 31 03:50:47 crc kubenswrapper[4667]: I0131 03:50:47.271441 4667 patch_prober.go:28] interesting pod/router-default-5444994796-kvgs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:50:47 crc kubenswrapper[4667]: [-]has-synced failed: reason withheld Jan 31 03:50:47 crc kubenswrapper[4667]: [+]process-running ok Jan 31 03:50:47 crc kubenswrapper[4667]: healthz check failed Jan 31 03:50:47 crc kubenswrapper[4667]: I0131 03:50:47.271493 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kvgs8" podUID="ed8b36d0-771d-48bb-9393-db864ff8ff84" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:50:48 crc kubenswrapper[4667]: I0131 03:50:48.268618 4667 patch_prober.go:28] interesting pod/router-default-5444994796-kvgs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:50:48 crc kubenswrapper[4667]: [-]has-synced failed: reason withheld Jan 31 03:50:48 crc kubenswrapper[4667]: [+]process-running ok Jan 31 03:50:48 crc kubenswrapper[4667]: healthz check failed Jan 31 03:50:48 crc kubenswrapper[4667]: I0131 03:50:48.268738 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kvgs8" podUID="ed8b36d0-771d-48bb-9393-db864ff8ff84" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:50:48 crc kubenswrapper[4667]: I0131 03:50:48.479622 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:48 crc kubenswrapper[4667]: I0131 03:50:48.844749 4667 patch_prober.go:28] interesting pod/console-f9d7485db-wjsth container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 31 03:50:48 crc kubenswrapper[4667]: I0131 03:50:48.845155 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wjsth" podUID="4f281370-6419-4dfb-b21f-9d1c9c7eddaa" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 31 03:50:48 crc kubenswrapper[4667]: I0131 03:50:48.962992 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cc9c9a49-4902-42ce-8adf-9d24cad698e7","Type":"ContainerStarted","Data":"2ae28eec31be87ac95d119afc9f0fc78c14620a36b15449284493b9524503716"} Jan 31 03:50:48 crc kubenswrapper[4667]: I0131 03:50:48.967705 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n5jv7" event={"ID":"4a24385e-62ca-4a82-8995-9f20115931c4","Type":"ContainerStarted","Data":"91b49ad8ea010c1a0ea9fe331211369a7ee43392a833308ef855833ece3d171a"} Jan 31 03:50:48 crc kubenswrapper[4667]: I0131 03:50:48.984986 4667 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.984973982 podStartE2EDuration="3.984973982s" podCreationTimestamp="2026-01-31 03:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:48.973702569 +0000 UTC m=+172.490037858" watchObservedRunningTime="2026-01-31 03:50:48.984973982 +0000 UTC m=+172.501309281" Jan 31 03:50:49 crc kubenswrapper[4667]: I0131 03:50:49.268553 4667 patch_prober.go:28] interesting pod/router-default-5444994796-kvgs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:50:49 crc kubenswrapper[4667]: [-]has-synced failed: reason withheld Jan 31 03:50:49 crc kubenswrapper[4667]: [+]process-running ok Jan 31 03:50:49 crc kubenswrapper[4667]: healthz check failed Jan 31 03:50:49 crc kubenswrapper[4667]: I0131 03:50:49.268618 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kvgs8" podUID="ed8b36d0-771d-48bb-9393-db864ff8ff84" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:50:50 crc kubenswrapper[4667]: I0131 03:50:50.008384 4667 generic.go:334] "Generic (PLEG): container finished" podID="cc9c9a49-4902-42ce-8adf-9d24cad698e7" containerID="2ae28eec31be87ac95d119afc9f0fc78c14620a36b15449284493b9524503716" exitCode=0 Jan 31 03:50:50 crc kubenswrapper[4667]: I0131 03:50:50.008486 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cc9c9a49-4902-42ce-8adf-9d24cad698e7","Type":"ContainerDied","Data":"2ae28eec31be87ac95d119afc9f0fc78c14620a36b15449284493b9524503716"} Jan 31 03:50:50 crc kubenswrapper[4667]: I0131 03:50:50.268655 4667 patch_prober.go:28] interesting pod/router-default-5444994796-kvgs8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 03:50:50 crc kubenswrapper[4667]: [+]has-synced ok Jan 31 03:50:50 crc kubenswrapper[4667]: [+]process-running ok Jan 31 03:50:50 crc kubenswrapper[4667]: healthz check failed Jan 31 03:50:50 crc kubenswrapper[4667]: I0131 03:50:50.268724 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kvgs8" podUID="ed8b36d0-771d-48bb-9393-db864ff8ff84" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 03:50:51 crc kubenswrapper[4667]: I0131 03:50:51.043806 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n5jv7" event={"ID":"4a24385e-62ca-4a82-8995-9f20115931c4","Type":"ContainerStarted","Data":"6a2c8f5c77132c980cf37c2dfd0735d5c5a7e2e6f90c0734afa1465a77fbf443"} Jan 31 03:50:51 crc kubenswrapper[4667]: I0131 03:50:51.058973 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-n5jv7" podStartSLOduration=148.058955818 podStartE2EDuration="2m28.058955818s" podCreationTimestamp="2026-01-31 03:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:50:51.056593447 +0000 UTC m=+174.572928746" 
watchObservedRunningTime="2026-01-31 03:50:51.058955818 +0000 UTC m=+174.575291117" Jan 31 03:50:51 crc kubenswrapper[4667]: I0131 03:50:51.270493 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:51 crc kubenswrapper[4667]: I0131 03:50:51.275750 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-kvgs8" Jan 31 03:50:51 crc kubenswrapper[4667]: I0131 03:50:51.617880 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 03:50:51 crc kubenswrapper[4667]: I0131 03:50:51.731331 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc9c9a49-4902-42ce-8adf-9d24cad698e7-kubelet-dir\") pod \"cc9c9a49-4902-42ce-8adf-9d24cad698e7\" (UID: \"cc9c9a49-4902-42ce-8adf-9d24cad698e7\") " Jan 31 03:50:51 crc kubenswrapper[4667]: I0131 03:50:51.731431 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc9c9a49-4902-42ce-8adf-9d24cad698e7-kube-api-access\") pod \"cc9c9a49-4902-42ce-8adf-9d24cad698e7\" (UID: \"cc9c9a49-4902-42ce-8adf-9d24cad698e7\") " Jan 31 03:50:51 crc kubenswrapper[4667]: I0131 03:50:51.731560 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc9c9a49-4902-42ce-8adf-9d24cad698e7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cc9c9a49-4902-42ce-8adf-9d24cad698e7" (UID: "cc9c9a49-4902-42ce-8adf-9d24cad698e7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:50:51 crc kubenswrapper[4667]: I0131 03:50:51.731696 4667 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc9c9a49-4902-42ce-8adf-9d24cad698e7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:51 crc kubenswrapper[4667]: I0131 03:50:51.741964 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc9c9a49-4902-42ce-8adf-9d24cad698e7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cc9c9a49-4902-42ce-8adf-9d24cad698e7" (UID: "cc9c9a49-4902-42ce-8adf-9d24cad698e7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:50:51 crc kubenswrapper[4667]: I0131 03:50:51.832575 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc9c9a49-4902-42ce-8adf-9d24cad698e7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 03:50:52 crc kubenswrapper[4667]: I0131 03:50:52.101250 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cc9c9a49-4902-42ce-8adf-9d24cad698e7","Type":"ContainerDied","Data":"7b147c2c68ab1a66955b5f82f1fca3fd934a738caabb957a974c9d8476e58c75"} Jan 31 03:50:52 crc kubenswrapper[4667]: I0131 03:50:52.101678 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b147c2c68ab1a66955b5f82f1fca3fd934a738caabb957a974c9d8476e58c75" Jan 31 03:50:52 crc kubenswrapper[4667]: I0131 03:50:52.101531 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 03:50:54 crc kubenswrapper[4667]: I0131 03:50:54.330457 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 03:50:57 crc kubenswrapper[4667]: I0131 03:50:57.267764 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-5zj2q" Jan 31 03:50:58 crc kubenswrapper[4667]: I0131 03:50:58.487592 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:50:59 crc kubenswrapper[4667]: I0131 03:50:59.100153 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:50:59 crc kubenswrapper[4667]: I0131 03:50:59.107795 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-wjsth" Jan 31 03:51:09 crc kubenswrapper[4667]: I0131 03:51:09.306736 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vsb97" Jan 31 03:51:15 crc kubenswrapper[4667]: I0131 03:51:15.705140 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 03:51:15 crc kubenswrapper[4667]: I0131 03:51:15.706152 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 03:51:17 crc kubenswrapper[4667]: E0131 03:51:17.126861 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 03:51:17 crc kubenswrapper[4667]: E0131 03:51:17.127293 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nk2j7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qwhn5_openshift-marketplace(2afc75c6-1f04-4acb-b958-4159c2764e5e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 03:51:17 crc kubenswrapper[4667]: E0131 03:51:17.128976 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qwhn5" podUID="2afc75c6-1f04-4acb-b958-4159c2764e5e" Jan 31 03:51:21 crc kubenswrapper[4667]: I0131 03:51:21.076545 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 03:51:21 crc kubenswrapper[4667]: E0131 03:51:21.077400 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc9c9a49-4902-42ce-8adf-9d24cad698e7" containerName="pruner" Jan 31 03:51:21 crc kubenswrapper[4667]: I0131 03:51:21.077412 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc9c9a49-4902-42ce-8adf-9d24cad698e7" containerName="pruner" Jan 31 03:51:21 crc kubenswrapper[4667]: I0131 03:51:21.077521 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc9c9a49-4902-42ce-8adf-9d24cad698e7" containerName="pruner" Jan 31 03:51:21 crc kubenswrapper[4667]: I0131 03:51:21.077887 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 03:51:21 crc kubenswrapper[4667]: I0131 03:51:21.084652 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 03:51:21 crc kubenswrapper[4667]: I0131 03:51:21.084704 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 03:51:21 crc kubenswrapper[4667]: I0131 03:51:21.086667 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 03:51:21 crc kubenswrapper[4667]: I0131 03:51:21.153089 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/537c03c8-f93a-41f7-a072-ce6a5485e72c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"537c03c8-f93a-41f7-a072-ce6a5485e72c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 03:51:21 crc kubenswrapper[4667]: I0131 03:51:21.153186 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/537c03c8-f93a-41f7-a072-ce6a5485e72c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"537c03c8-f93a-41f7-a072-ce6a5485e72c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 03:51:21 crc kubenswrapper[4667]: I0131 03:51:21.255343 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/537c03c8-f93a-41f7-a072-ce6a5485e72c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"537c03c8-f93a-41f7-a072-ce6a5485e72c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 03:51:21 crc kubenswrapper[4667]: I0131 03:51:21.255437 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/537c03c8-f93a-41f7-a072-ce6a5485e72c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"537c03c8-f93a-41f7-a072-ce6a5485e72c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 03:51:21 crc kubenswrapper[4667]: I0131 03:51:21.255560 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/537c03c8-f93a-41f7-a072-ce6a5485e72c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"537c03c8-f93a-41f7-a072-ce6a5485e72c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 03:51:21 crc kubenswrapper[4667]: I0131 03:51:21.275239 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/537c03c8-f93a-41f7-a072-ce6a5485e72c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"537c03c8-f93a-41f7-a072-ce6a5485e72c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 03:51:21 crc kubenswrapper[4667]: I0131 03:51:21.410808 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 03:51:21 crc kubenswrapper[4667]: E0131 03:51:21.476505 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qwhn5" podUID="2afc75c6-1f04-4acb-b958-4159c2764e5e" Jan 31 03:51:21 crc kubenswrapper[4667]: E0131 03:51:21.555574 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 03:51:21 crc kubenswrapper[4667]: E0131 03:51:21.555763 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jmrmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-fz7n8_openshift-marketplace(b6b3a151-c2e9-4461-92c3-b7752926f08c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 03:51:21 crc kubenswrapper[4667]: E0131 03:51:21.560722 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-fz7n8" podUID="b6b3a151-c2e9-4461-92c3-b7752926f08c" Jan 31 03:51:23 crc kubenswrapper[4667]: E0131 03:51:23.140713 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-fz7n8" podUID="b6b3a151-c2e9-4461-92c3-b7752926f08c" Jan 31 03:51:23 crc kubenswrapper[4667]: E0131 03:51:23.213993 4667 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 03:51:23 crc kubenswrapper[4667]: E0131 03:51:23.214231 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvkr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-btrmv_openshift-marketplace(2720ee62-278e-4d32-924f-aa7401d1e7cb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 03:51:23 crc kubenswrapper[4667]: E0131 03:51:23.215487 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-btrmv" podUID="2720ee62-278e-4d32-924f-aa7401d1e7cb" Jan 31 03:51:23 crc kubenswrapper[4667]: E0131 03:51:23.248504 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 03:51:23 crc kubenswrapper[4667]: E0131 03:51:23.249317 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dcx9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4lz7l_openshift-marketplace(7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 03:51:23 crc kubenswrapper[4667]: E0131 03:51:23.250755 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4lz7l" podUID="7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6" Jan 31 03:51:23 crc kubenswrapper[4667]: E0131 03:51:23.263650 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 03:51:23 crc kubenswrapper[4667]: E0131 03:51:23.263780 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8x69x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-gzpml_openshift-marketplace(6d010741-ba9c-43b5-9dc3-87cb17d353d2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 03:51:23 crc kubenswrapper[4667]: E0131 03:51:23.265136 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-gzpml" podUID="6d010741-ba9c-43b5-9dc3-87cb17d353d2" Jan 31 03:51:23 crc kubenswrapper[4667]: E0131 03:51:23.285764 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 03:51:23 crc kubenswrapper[4667]: E0131 03:51:23.285924 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-99562,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5pmpf_openshift-marketplace(fd00c210-5665-4575-a5a3-413a89f5c03a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 03:51:23 crc kubenswrapper[4667]: E0131 03:51:23.286991 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5pmpf" podUID="fd00c210-5665-4575-a5a3-413a89f5c03a" Jan 31 03:51:24 crc kubenswrapper[4667]: E0131 03:51:24.879737 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-btrmv" podUID="2720ee62-278e-4d32-924f-aa7401d1e7cb" Jan 31 03:51:24 crc kubenswrapper[4667]: E0131 03:51:24.879767 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gzpml" podUID="6d010741-ba9c-43b5-9dc3-87cb17d353d2" Jan 31 03:51:24 crc kubenswrapper[4667]: E0131 03:51:24.879919 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4lz7l" podUID="7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6" Jan 31 03:51:24 crc kubenswrapper[4667]: E0131 03:51:24.965351 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 03:51:24 crc kubenswrapper[4667]: E0131 03:51:24.965506 4667 kuberuntime_manager.go:1274] "Unhandled 
Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdvrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-52dpw_openshift-marketplace(8ac842ab-5970-4e4c-81ee-2f8b40d61416): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 03:51:24 crc kubenswrapper[4667]: E0131 03:51:24.970730 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-52dpw" podUID="8ac842ab-5970-4e4c-81ee-2f8b40d61416" Jan 31 03:51:25 crc kubenswrapper[4667]: E0131 03:51:25.012141 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 03:51:25 crc kubenswrapper[4667]: E0131 03:51:25.012643 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xzjr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-lh6xn_openshift-marketplace(6fc82b44-ef8d-4f7c-a022-fcbed68b1fab): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 03:51:25 crc kubenswrapper[4667]: E0131 03:51:25.014197 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-lh6xn" podUID="6fc82b44-ef8d-4f7c-a022-fcbed68b1fab" Jan 31 03:51:25 crc kubenswrapper[4667]: I0131 03:51:25.291215 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 03:51:25 crc kubenswrapper[4667]: W0131 03:51:25.297790 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod537c03c8_f93a_41f7_a072_ce6a5485e72c.slice/crio-3a10e4db12384b6273256a44104b57dfea08f44501cd0c1c9a2cd5d9f3733888 WatchSource:0}: Error finding container 3a10e4db12384b6273256a44104b57dfea08f44501cd0c1c9a2cd5d9f3733888: Status 404 returned error can't find the container with id 3a10e4db12384b6273256a44104b57dfea08f44501cd0c1c9a2cd5d9f3733888 Jan 31 03:51:25 crc kubenswrapper[4667]: I0131 03:51:25.322314 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"537c03c8-f93a-41f7-a072-ce6a5485e72c","Type":"ContainerStarted","Data":"3a10e4db12384b6273256a44104b57dfea08f44501cd0c1c9a2cd5d9f3733888"} Jan 31 03:51:25 crc kubenswrapper[4667]: I0131 03:51:25.873614 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 03:51:25 crc kubenswrapper[4667]: I0131 03:51:25.874594 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:51:25 crc kubenswrapper[4667]: I0131 03:51:25.889201 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 03:51:25 crc kubenswrapper[4667]: I0131 03:51:25.925198 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9115bbd-7154-4f77-8d1d-68b1e78a478a-kube-api-access\") pod \"installer-9-crc\" (UID: \"b9115bbd-7154-4f77-8d1d-68b1e78a478a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:51:25 crc kubenswrapper[4667]: I0131 03:51:25.925275 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9115bbd-7154-4f77-8d1d-68b1e78a478a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b9115bbd-7154-4f77-8d1d-68b1e78a478a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:51:25 crc kubenswrapper[4667]: I0131 03:51:25.925336 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b9115bbd-7154-4f77-8d1d-68b1e78a478a-var-lock\") pod \"installer-9-crc\" (UID: \"b9115bbd-7154-4f77-8d1d-68b1e78a478a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:51:26 crc kubenswrapper[4667]: I0131 03:51:26.026147 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9115bbd-7154-4f77-8d1d-68b1e78a478a-kube-api-access\") pod \"installer-9-crc\" (UID: \"b9115bbd-7154-4f77-8d1d-68b1e78a478a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:51:26 crc kubenswrapper[4667]: I0131 03:51:26.026605 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9115bbd-7154-4f77-8d1d-68b1e78a478a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b9115bbd-7154-4f77-8d1d-68b1e78a478a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:51:26 crc kubenswrapper[4667]: I0131 03:51:26.026679 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9115bbd-7154-4f77-8d1d-68b1e78a478a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b9115bbd-7154-4f77-8d1d-68b1e78a478a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:51:26 crc kubenswrapper[4667]: I0131 03:51:26.026719 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b9115bbd-7154-4f77-8d1d-68b1e78a478a-var-lock\") pod \"installer-9-crc\" (UID: \"b9115bbd-7154-4f77-8d1d-68b1e78a478a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:51:26 crc kubenswrapper[4667]: I0131 03:51:26.026788 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b9115bbd-7154-4f77-8d1d-68b1e78a478a-var-lock\") pod \"installer-9-crc\" (UID: \"b9115bbd-7154-4f77-8d1d-68b1e78a478a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:51:26 crc kubenswrapper[4667]: I0131 03:51:26.045453 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9115bbd-7154-4f77-8d1d-68b1e78a478a-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"b9115bbd-7154-4f77-8d1d-68b1e78a478a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:51:26 crc kubenswrapper[4667]: I0131 03:51:26.233302 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:51:26 crc kubenswrapper[4667]: I0131 03:51:26.328268 4667 generic.go:334] "Generic (PLEG): container finished" podID="537c03c8-f93a-41f7-a072-ce6a5485e72c" containerID="2b28f05a9565649349a4e446fd863a6baccb1df35753b0e52b78bc065063d88a" exitCode=0 Jan 31 03:51:26 crc kubenswrapper[4667]: I0131 03:51:26.328555 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"537c03c8-f93a-41f7-a072-ce6a5485e72c","Type":"ContainerDied","Data":"2b28f05a9565649349a4e446fd863a6baccb1df35753b0e52b78bc065063d88a"} Jan 31 03:51:26 crc kubenswrapper[4667]: I0131 03:51:26.617190 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 03:51:26 crc kubenswrapper[4667]: W0131 03:51:26.626236 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb9115bbd_7154_4f77_8d1d_68b1e78a478a.slice/crio-d1d4914335748a23174fdc05a0dd600bc8dcdef4fdb0b0d895ae8a437efea966 WatchSource:0}: Error finding container d1d4914335748a23174fdc05a0dd600bc8dcdef4fdb0b0d895ae8a437efea966: Status 404 returned error can't find the container with id d1d4914335748a23174fdc05a0dd600bc8dcdef4fdb0b0d895ae8a437efea966 Jan 31 03:51:27 crc kubenswrapper[4667]: I0131 03:51:27.334053 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b9115bbd-7154-4f77-8d1d-68b1e78a478a","Type":"ContainerStarted","Data":"84f2da8dbcfdcf5db87f14065eef1cf64cb6d2e34db17fe952d7085f69a5322e"} Jan 31 03:51:27 crc kubenswrapper[4667]: I0131 03:51:27.334356 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b9115bbd-7154-4f77-8d1d-68b1e78a478a","Type":"ContainerStarted","Data":"d1d4914335748a23174fdc05a0dd600bc8dcdef4fdb0b0d895ae8a437efea966"} Jan 31 03:51:27 crc kubenswrapper[4667]: I0131 03:51:27.348982 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.348959253 podStartE2EDuration="2.348959253s" podCreationTimestamp="2026-01-31 03:51:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:51:27.348202731 +0000 UTC m=+210.864538030" watchObservedRunningTime="2026-01-31 03:51:27.348959253 +0000 UTC m=+210.865294572" Jan 31 03:51:27 crc kubenswrapper[4667]: I0131 03:51:27.592033 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 03:51:27 crc kubenswrapper[4667]: I0131 03:51:27.647735 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/537c03c8-f93a-41f7-a072-ce6a5485e72c-kube-api-access\") pod \"537c03c8-f93a-41f7-a072-ce6a5485e72c\" (UID: \"537c03c8-f93a-41f7-a072-ce6a5485e72c\") " Jan 31 03:51:27 crc kubenswrapper[4667]: I0131 03:51:27.648144 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/537c03c8-f93a-41f7-a072-ce6a5485e72c-kubelet-dir\") pod \"537c03c8-f93a-41f7-a072-ce6a5485e72c\" (UID: \"537c03c8-f93a-41f7-a072-ce6a5485e72c\") " Jan 31 03:51:27 crc kubenswrapper[4667]: I0131 03:51:27.648213 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/537c03c8-f93a-41f7-a072-ce6a5485e72c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "537c03c8-f93a-41f7-a072-ce6a5485e72c" (UID: "537c03c8-f93a-41f7-a072-ce6a5485e72c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:51:27 crc kubenswrapper[4667]: I0131 03:51:27.648648 4667 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/537c03c8-f93a-41f7-a072-ce6a5485e72c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:27 crc kubenswrapper[4667]: I0131 03:51:27.653184 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/537c03c8-f93a-41f7-a072-ce6a5485e72c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "537c03c8-f93a-41f7-a072-ce6a5485e72c" (UID: "537c03c8-f93a-41f7-a072-ce6a5485e72c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:51:27 crc kubenswrapper[4667]: I0131 03:51:27.750292 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/537c03c8-f93a-41f7-a072-ce6a5485e72c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:28 crc kubenswrapper[4667]: I0131 03:51:28.340227 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 03:51:28 crc kubenswrapper[4667]: I0131 03:51:28.342954 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"537c03c8-f93a-41f7-a072-ce6a5485e72c","Type":"ContainerDied","Data":"3a10e4db12384b6273256a44104b57dfea08f44501cd0c1c9a2cd5d9f3733888"} Jan 31 03:51:28 crc kubenswrapper[4667]: I0131 03:51:28.343076 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a10e4db12384b6273256a44104b57dfea08f44501cd0c1c9a2cd5d9f3733888" Jan 31 03:51:34 crc kubenswrapper[4667]: I0131 03:51:34.368917 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwhn5" event={"ID":"2afc75c6-1f04-4acb-b958-4159c2764e5e","Type":"ContainerStarted","Data":"bd6c84b09ef56c03042fae916297ec8832a3b24d70c2e9f7c1646c93b483791f"} Jan 31 03:51:35 crc kubenswrapper[4667]: I0131 03:51:35.382155 4667 generic.go:334] "Generic (PLEG): container finished" podID="2afc75c6-1f04-4acb-b958-4159c2764e5e" containerID="bd6c84b09ef56c03042fae916297ec8832a3b24d70c2e9f7c1646c93b483791f" exitCode=0 Jan 31 03:51:35 crc kubenswrapper[4667]: I0131 03:51:35.382210 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwhn5" event={"ID":"2afc75c6-1f04-4acb-b958-4159c2764e5e","Type":"ContainerDied","Data":"bd6c84b09ef56c03042fae916297ec8832a3b24d70c2e9f7c1646c93b483791f"} Jan 31 03:51:36 crc kubenswrapper[4667]: I0131 03:51:36.392769 4667 generic.go:334] "Generic (PLEG): container finished" podID="2720ee62-278e-4d32-924f-aa7401d1e7cb" containerID="704a22d4c0eb7714ab53f00a5c160af18b1cb7fc69774460a9e2ef365243dbfa" exitCode=0 Jan 31 03:51:36 crc kubenswrapper[4667]: I0131 03:51:36.393271 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btrmv" event={"ID":"2720ee62-278e-4d32-924f-aa7401d1e7cb","Type":"ContainerDied","Data":"704a22d4c0eb7714ab53f00a5c160af18b1cb7fc69774460a9e2ef365243dbfa"} Jan 31 03:51:36 crc kubenswrapper[4667]: I0131 03:51:36.398947 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwhn5" event={"ID":"2afc75c6-1f04-4acb-b958-4159c2764e5e","Type":"ContainerStarted","Data":"c14f0c39563f6d54c45637235431b5ce879fd1b858c78a7133a6d38a0b3d18ac"} Jan 31 03:51:36 crc kubenswrapper[4667]: I0131 03:51:36.434690 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qwhn5" podStartSLOduration=3.221075798 podStartE2EDuration="57.434675443s" podCreationTimestamp="2026-01-31 03:50:39 +0000 UTC" firstStartedPulling="2026-01-31 03:50:41.626338122 +0000 UTC m=+165.142673421" lastFinishedPulling="2026-01-31 03:51:35.839937767 +0000 UTC m=+219.356273066" observedRunningTime="2026-01-31 03:51:36.434511028 +0000 UTC m=+219.950846327" watchObservedRunningTime="2026-01-31 03:51:36.434675443 +0000 UTC m=+219.951010742" Jan 31 03:51:36 crc kubenswrapper[4667]: I0131 03:51:36.692764 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dmtcm"] Jan 31 03:51:37 crc kubenswrapper[4667]: I0131 03:51:37.406117 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btrmv" event={"ID":"2720ee62-278e-4d32-924f-aa7401d1e7cb","Type":"ContainerStarted","Data":"a9b73582f7941d912f8d6b31cfc6afd6daab615035e5c7cafd654529229119ac"} 
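[Editor's note] The "Observed pod startup duration" entries above are internally consistent and can be checked by hand: for these entries, podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that end-to-end figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal stdlib-only Go sketch that reproduces the arithmetic from the redhat-marketplace-qwhn5 entry; the timestamps are copied verbatim from the log, and the variable names are local to the sketch, not kubelet API:

package main

import (
	"fmt"
	"time"
)

// Go's default time.Time string format, which is what the kubelet
// prints for the timestamps in pod_startup_latency_tracker entries.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the redhat-marketplace-qwhn5 entry above.
	created := mustParse("2026-01-31 03:50:39 +0000 UTC")            // podCreationTimestamp
	firstPull := mustParse("2026-01-31 03:50:41.626338122 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2026-01-31 03:51:35.839937767 +0000 UTC")  // lastFinishedPulling
	observed := mustParse("2026-01-31 03:51:36.434675443 +0000 UTC")  // watchObservedRunningTime

	e2e := observed.Sub(created)         // podStartE2EDuration: 57.434675443s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: 3.221075798s
	fmt.Println(e2e, slo)
}

Running it prints "57.434675443s 3.221075798s", matching the podStartE2EDuration and podStartSLOduration fields logged by pod_startup_latency_tracker.go:104 above. This is also why the marketplace pods show SLO durations of a few seconds against end-to-end durations over a minute: almost all of their startup time was spent in the ErrImagePull/ImagePullBackOff cycles recorded earlier in this log.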
Jan 31 03:51:37 crc kubenswrapper[4667]: I0131 03:51:37.429432 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-btrmv" podStartSLOduration=4.234269054 podStartE2EDuration="1m0.429413455s" podCreationTimestamp="2026-01-31 03:50:37 +0000 UTC" firstStartedPulling="2026-01-31 03:50:40.600386876 +0000 UTC m=+164.116722165" lastFinishedPulling="2026-01-31 03:51:36.795531277 +0000 UTC m=+220.311866566" observedRunningTime="2026-01-31 03:51:37.426672287 +0000 UTC m=+220.943007596" watchObservedRunningTime="2026-01-31 03:51:37.429413455 +0000 UTC m=+220.945748754" Jan 31 03:51:37 crc kubenswrapper[4667]: I0131 03:51:37.639043 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-btrmv" Jan 31 03:51:37 crc kubenswrapper[4667]: I0131 03:51:37.639109 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-btrmv" Jan 31 03:51:38 crc kubenswrapper[4667]: I0131 03:51:38.780194 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-btrmv" podUID="2720ee62-278e-4d32-924f-aa7401d1e7cb" containerName="registry-server" probeResult="failure" output=< Jan 31 03:51:38 crc kubenswrapper[4667]: timeout: failed to connect service ":50051" within 1s Jan 31 03:51:38 crc kubenswrapper[4667]: > Jan 31 03:51:39 crc kubenswrapper[4667]: I0131 03:51:39.418184 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lh6xn" event={"ID":"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab","Type":"ContainerStarted","Data":"e9c48e757c90bab7963af4b346bf0660d686bad06c1194af33e61d7af349f3f0"} Jan 31 03:51:39 crc kubenswrapper[4667]: I0131 03:51:39.420063 4667 generic.go:334] "Generic (PLEG): container finished" podID="6d010741-ba9c-43b5-9dc3-87cb17d353d2" containerID="6b0a465ff042689a202f61d53731ecac8336dee14fb3097603470023e1a65e7f" exitCode=0 Jan 31 03:51:39 crc kubenswrapper[4667]: I0131 03:51:39.420132 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gzpml" event={"ID":"6d010741-ba9c-43b5-9dc3-87cb17d353d2","Type":"ContainerDied","Data":"6b0a465ff042689a202f61d53731ecac8336dee14fb3097603470023e1a65e7f"} Jan 31 03:51:39 crc kubenswrapper[4667]: I0131 03:51:39.422010 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pmpf" event={"ID":"fd00c210-5665-4575-a5a3-413a89f5c03a","Type":"ContainerStarted","Data":"0c4ffbc5a6a136c4802a1c7af9791ad9ab1b5c1d368e4af6c98a4cb0fc93a752"} Jan 31 03:51:39 crc kubenswrapper[4667]: I0131 03:51:39.860638 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qwhn5" Jan 31 03:51:39 crc kubenswrapper[4667]: I0131 03:51:39.860686 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qwhn5" Jan 31 03:51:39 crc kubenswrapper[4667]: I0131 03:51:39.977486 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qwhn5" Jan 31 03:51:40 crc kubenswrapper[4667]: I0131 03:51:40.429731 4667 generic.go:334] "Generic (PLEG): container finished" podID="fd00c210-5665-4575-a5a3-413a89f5c03a" containerID="0c4ffbc5a6a136c4802a1c7af9791ad9ab1b5c1d368e4af6c98a4cb0fc93a752" exitCode=0 Jan 31 03:51:40 crc kubenswrapper[4667]: I0131 03:51:40.429819 4667 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pmpf" event={"ID":"fd00c210-5665-4575-a5a3-413a89f5c03a","Type":"ContainerDied","Data":"0c4ffbc5a6a136c4802a1c7af9791ad9ab1b5c1d368e4af6c98a4cb0fc93a752"} Jan 31 03:51:40 crc kubenswrapper[4667]: I0131 03:51:40.433005 4667 generic.go:334] "Generic (PLEG): container finished" podID="6fc82b44-ef8d-4f7c-a022-fcbed68b1fab" containerID="e9c48e757c90bab7963af4b346bf0660d686bad06c1194af33e61d7af349f3f0" exitCode=0 Jan 31 03:51:40 crc kubenswrapper[4667]: I0131 03:51:40.433123 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lh6xn" event={"ID":"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab","Type":"ContainerDied","Data":"e9c48e757c90bab7963af4b346bf0660d686bad06c1194af33e61d7af349f3f0"} Jan 31 03:51:40 crc kubenswrapper[4667]: I0131 03:51:40.480446 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qwhn5" Jan 31 03:51:41 crc kubenswrapper[4667]: I0131 03:51:41.798454 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwhn5"] Jan 31 03:51:42 crc kubenswrapper[4667]: I0131 03:51:42.445659 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qwhn5" podUID="2afc75c6-1f04-4acb-b958-4159c2764e5e" containerName="registry-server" containerID="cri-o://c14f0c39563f6d54c45637235431b5ce879fd1b858c78a7133a6d38a0b3d18ac" gracePeriod=2 Jan 31 03:51:44 crc kubenswrapper[4667]: I0131 03:51:44.457378 4667 generic.go:334] "Generic (PLEG): container finished" podID="2afc75c6-1f04-4acb-b958-4159c2764e5e" containerID="c14f0c39563f6d54c45637235431b5ce879fd1b858c78a7133a6d38a0b3d18ac" exitCode=0 Jan 31 03:51:44 crc kubenswrapper[4667]: I0131 03:51:44.457424 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwhn5" event={"ID":"2afc75c6-1f04-4acb-b958-4159c2764e5e","Type":"ContainerDied","Data":"c14f0c39563f6d54c45637235431b5ce879fd1b858c78a7133a6d38a0b3d18ac"} Jan 31 03:51:44 crc kubenswrapper[4667]: I0131 03:51:44.779660 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwhn5" Jan 31 03:51:44 crc kubenswrapper[4667]: I0131 03:51:44.799751 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2afc75c6-1f04-4acb-b958-4159c2764e5e-catalog-content\") pod \"2afc75c6-1f04-4acb-b958-4159c2764e5e\" (UID: \"2afc75c6-1f04-4acb-b958-4159c2764e5e\") " Jan 31 03:51:44 crc kubenswrapper[4667]: I0131 03:51:44.799832 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2afc75c6-1f04-4acb-b958-4159c2764e5e-utilities\") pod \"2afc75c6-1f04-4acb-b958-4159c2764e5e\" (UID: \"2afc75c6-1f04-4acb-b958-4159c2764e5e\") " Jan 31 03:51:44 crc kubenswrapper[4667]: I0131 03:51:44.799882 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk2j7\" (UniqueName: \"kubernetes.io/projected/2afc75c6-1f04-4acb-b958-4159c2764e5e-kube-api-access-nk2j7\") pod \"2afc75c6-1f04-4acb-b958-4159c2764e5e\" (UID: \"2afc75c6-1f04-4acb-b958-4159c2764e5e\") " Jan 31 03:51:44 crc kubenswrapper[4667]: I0131 03:51:44.802760 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2afc75c6-1f04-4acb-b958-4159c2764e5e-utilities" (OuterVolumeSpecName: "utilities") pod "2afc75c6-1f04-4acb-b958-4159c2764e5e" (UID: "2afc75c6-1f04-4acb-b958-4159c2764e5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:51:44 crc kubenswrapper[4667]: I0131 03:51:44.806977 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2afc75c6-1f04-4acb-b958-4159c2764e5e-kube-api-access-nk2j7" (OuterVolumeSpecName: "kube-api-access-nk2j7") pod "2afc75c6-1f04-4acb-b958-4159c2764e5e" (UID: "2afc75c6-1f04-4acb-b958-4159c2764e5e"). InnerVolumeSpecName "kube-api-access-nk2j7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:51:44 crc kubenswrapper[4667]: I0131 03:51:44.827716 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2afc75c6-1f04-4acb-b958-4159c2764e5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2afc75c6-1f04-4acb-b958-4159c2764e5e" (UID: "2afc75c6-1f04-4acb-b958-4159c2764e5e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:51:44 crc kubenswrapper[4667]: I0131 03:51:44.901050 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2afc75c6-1f04-4acb-b958-4159c2764e5e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:44 crc kubenswrapper[4667]: I0131 03:51:44.901089 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2afc75c6-1f04-4acb-b958-4159c2764e5e-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:44 crc kubenswrapper[4667]: I0131 03:51:44.901102 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk2j7\" (UniqueName: \"kubernetes.io/projected/2afc75c6-1f04-4acb-b958-4159c2764e5e-kube-api-access-nk2j7\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:45 crc kubenswrapper[4667]: I0131 03:51:45.468399 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz7n8" event={"ID":"b6b3a151-c2e9-4461-92c3-b7752926f08c","Type":"ContainerStarted","Data":"05af988befd32ddd2d7e605a49a92b22e48eb87342711e074dd4a3092dfd5470"} Jan 31 03:51:45 crc kubenswrapper[4667]: I0131 03:51:45.472812 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gzpml" event={"ID":"6d010741-ba9c-43b5-9dc3-87cb17d353d2","Type":"ContainerStarted","Data":"38f9f897055c327a1fd1b71b991eeb503e77ff9d41adadf4ca7ed4d646a588fa"} Jan 31 03:51:45 crc kubenswrapper[4667]: I0131 03:51:45.479659 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwhn5" event={"ID":"2afc75c6-1f04-4acb-b958-4159c2764e5e","Type":"ContainerDied","Data":"d5763716cac981a726cbf98d3391701b2c4c30ce50323806b5e7ce2fdf2ce096"} Jan 31 03:51:45 crc kubenswrapper[4667]: I0131 03:51:45.479722 4667 scope.go:117] "RemoveContainer" containerID="c14f0c39563f6d54c45637235431b5ce879fd1b858c78a7133a6d38a0b3d18ac" Jan 31 03:51:45 crc kubenswrapper[4667]: I0131 03:51:45.479943 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwhn5" Jan 31 03:51:45 crc kubenswrapper[4667]: I0131 03:51:45.511668 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwhn5"] Jan 31 03:51:45 crc kubenswrapper[4667]: I0131 03:51:45.517519 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwhn5"] Jan 31 03:51:45 crc kubenswrapper[4667]: I0131 03:51:45.550795 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gzpml" podStartSLOduration=4.286751744 podStartE2EDuration="1m6.550730728s" podCreationTimestamp="2026-01-31 03:50:39 +0000 UTC" firstStartedPulling="2026-01-31 03:50:41.622715488 +0000 UTC m=+165.139050787" lastFinishedPulling="2026-01-31 03:51:43.886694472 +0000 UTC m=+227.403029771" observedRunningTime="2026-01-31 03:51:45.540339376 +0000 UTC m=+229.056674675" watchObservedRunningTime="2026-01-31 03:51:45.550730728 +0000 UTC m=+229.067066037" Jan 31 03:51:45 crc kubenswrapper[4667]: I0131 03:51:45.596778 4667 scope.go:117] "RemoveContainer" containerID="bd6c84b09ef56c03042fae916297ec8832a3b24d70c2e9f7c1646c93b483791f" Jan 31 03:51:45 crc kubenswrapper[4667]: I0131 03:51:45.704417 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 03:51:45 crc kubenswrapper[4667]: I0131 03:51:45.704532 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 03:51:45 crc kubenswrapper[4667]: I0131 03:51:45.704624 4667 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 03:51:45 crc kubenswrapper[4667]: I0131 03:51:45.705503 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d"} pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 03:51:45 crc kubenswrapper[4667]: I0131 03:51:45.705661 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" containerID="cri-o://298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d" gracePeriod=600 Jan 31 03:51:46 crc kubenswrapper[4667]: I0131 03:51:46.494317 4667 generic.go:334] "Generic (PLEG): container finished" podID="b6b3a151-c2e9-4461-92c3-b7752926f08c" containerID="05af988befd32ddd2d7e605a49a92b22e48eb87342711e074dd4a3092dfd5470" exitCode=0 Jan 31 03:51:46 crc kubenswrapper[4667]: I0131 03:51:46.494383 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz7n8" 
event={"ID":"b6b3a151-c2e9-4461-92c3-b7752926f08c","Type":"ContainerDied","Data":"05af988befd32ddd2d7e605a49a92b22e48eb87342711e074dd4a3092dfd5470"} Jan 31 03:51:46 crc kubenswrapper[4667]: I0131 03:51:46.929950 4667 scope.go:117] "RemoveContainer" containerID="eb472256f9efbdf663c1aed2cf448b3ef9a2e3b2d1c0cefebb56ee7289ed6713" Jan 31 03:51:47 crc kubenswrapper[4667]: I0131 03:51:47.297436 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2afc75c6-1f04-4acb-b958-4159c2764e5e" path="/var/lib/kubelet/pods/2afc75c6-1f04-4acb-b958-4159c2764e5e/volumes" Jan 31 03:51:47 crc kubenswrapper[4667]: I0131 03:51:47.500805 4667 generic.go:334] "Generic (PLEG): container finished" podID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerID="298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d" exitCode=0 Jan 31 03:51:47 crc kubenswrapper[4667]: I0131 03:51:47.500900 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerDied","Data":"298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d"} Jan 31 03:51:47 crc kubenswrapper[4667]: I0131 03:51:47.681674 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-btrmv" Jan 31 03:51:47 crc kubenswrapper[4667]: I0131 03:51:47.728574 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-btrmv" Jan 31 03:51:49 crc kubenswrapper[4667]: I0131 03:51:49.458193 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gzpml" Jan 31 03:51:49 crc kubenswrapper[4667]: I0131 03:51:49.461558 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gzpml" Jan 31 03:51:49 crc kubenswrapper[4667]: I0131 03:51:49.517442 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerStarted","Data":"7b687bdeec0b8354da9e648ddc07789b2d329a717764df615911c6a3b3a6768e"} Jan 31 03:51:49 crc kubenswrapper[4667]: I0131 03:51:49.522342 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gzpml" Jan 31 03:51:50 crc kubenswrapper[4667]: I0131 03:51:50.003020 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-btrmv"] Jan 31 03:51:50 crc kubenswrapper[4667]: I0131 03:51:50.003259 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-btrmv" podUID="2720ee62-278e-4d32-924f-aa7401d1e7cb" containerName="registry-server" containerID="cri-o://a9b73582f7941d912f8d6b31cfc6afd6daab615035e5c7cafd654529229119ac" gracePeriod=2 Jan 31 03:51:50 crc kubenswrapper[4667]: I0131 03:51:50.525421 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52dpw" event={"ID":"8ac842ab-5970-4e4c-81ee-2f8b40d61416","Type":"ContainerStarted","Data":"d6dab8aece37cea18f7e20c3cbcdd156c55c09eb4d0e1973ad5b4d56ec00cc05"} Jan 31 03:51:50 crc kubenswrapper[4667]: I0131 03:51:50.538460 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lh6xn" 
event={"ID":"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab","Type":"ContainerStarted","Data":"1f0936dcdce432c420bef9d12c954ae33873440cfa4dbcf167acc707c076a5c4"} Jan 31 03:51:50 crc kubenswrapper[4667]: I0131 03:51:50.540601 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4lz7l" event={"ID":"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6","Type":"ContainerStarted","Data":"9386d92fbf07db9f75520236fbfd77c24238c6123c6a8945dade4a4478cd2dd0"} Jan 31 03:51:50 crc kubenswrapper[4667]: I0131 03:51:50.554014 4667 generic.go:334] "Generic (PLEG): container finished" podID="2720ee62-278e-4d32-924f-aa7401d1e7cb" containerID="a9b73582f7941d912f8d6b31cfc6afd6daab615035e5c7cafd654529229119ac" exitCode=0 Jan 31 03:51:50 crc kubenswrapper[4667]: I0131 03:51:50.554110 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btrmv" event={"ID":"2720ee62-278e-4d32-924f-aa7401d1e7cb","Type":"ContainerDied","Data":"a9b73582f7941d912f8d6b31cfc6afd6daab615035e5c7cafd654529229119ac"} Jan 31 03:51:50 crc kubenswrapper[4667]: I0131 03:51:50.564134 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pmpf" event={"ID":"fd00c210-5665-4575-a5a3-413a89f5c03a","Type":"ContainerStarted","Data":"a784507c4dc97618f1f424a9154b00db086d04778152f1fa6fcc795436273250"} Jan 31 03:51:50 crc kubenswrapper[4667]: I0131 03:51:50.616392 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lh6xn" podStartSLOduration=5.961541294 podStartE2EDuration="1m13.616368924s" podCreationTimestamp="2026-01-31 03:50:37 +0000 UTC" firstStartedPulling="2026-01-31 03:50:40.607270965 +0000 UTC m=+164.123606264" lastFinishedPulling="2026-01-31 03:51:48.262098595 +0000 UTC m=+231.778433894" observedRunningTime="2026-01-31 03:51:50.614052818 +0000 UTC m=+234.130388117" watchObservedRunningTime="2026-01-31 03:51:50.616368924 +0000 UTC m=+234.132704223" Jan 31 03:51:50 crc kubenswrapper[4667]: I0131 03:51:50.632930 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gzpml" Jan 31 03:51:50 crc kubenswrapper[4667]: I0131 03:51:50.642521 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5pmpf" podStartSLOduration=3.301033022 podStartE2EDuration="1m10.642489619s" podCreationTimestamp="2026-01-31 03:50:40 +0000 UTC" firstStartedPulling="2026-01-31 03:50:42.666784144 +0000 UTC m=+166.183119443" lastFinishedPulling="2026-01-31 03:51:50.008240741 +0000 UTC m=+233.524576040" observedRunningTime="2026-01-31 03:51:50.641695926 +0000 UTC m=+234.158031215" watchObservedRunningTime="2026-01-31 03:51:50.642489619 +0000 UTC m=+234.158824918" Jan 31 03:51:50 crc kubenswrapper[4667]: I0131 03:51:50.833875 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5pmpf" Jan 31 03:51:50 crc kubenswrapper[4667]: I0131 03:51:50.833924 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5pmpf" Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.227784 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-btrmv" Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.313778 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2720ee62-278e-4d32-924f-aa7401d1e7cb-catalog-content\") pod \"2720ee62-278e-4d32-924f-aa7401d1e7cb\" (UID: \"2720ee62-278e-4d32-924f-aa7401d1e7cb\") " Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.313867 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvkr9\" (UniqueName: \"kubernetes.io/projected/2720ee62-278e-4d32-924f-aa7401d1e7cb-kube-api-access-vvkr9\") pod \"2720ee62-278e-4d32-924f-aa7401d1e7cb\" (UID: \"2720ee62-278e-4d32-924f-aa7401d1e7cb\") " Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.313913 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2720ee62-278e-4d32-924f-aa7401d1e7cb-utilities\") pod \"2720ee62-278e-4d32-924f-aa7401d1e7cb\" (UID: \"2720ee62-278e-4d32-924f-aa7401d1e7cb\") " Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.316109 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2720ee62-278e-4d32-924f-aa7401d1e7cb-utilities" (OuterVolumeSpecName: "utilities") pod "2720ee62-278e-4d32-924f-aa7401d1e7cb" (UID: "2720ee62-278e-4d32-924f-aa7401d1e7cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.332764 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2720ee62-278e-4d32-924f-aa7401d1e7cb-kube-api-access-vvkr9" (OuterVolumeSpecName: "kube-api-access-vvkr9") pod "2720ee62-278e-4d32-924f-aa7401d1e7cb" (UID: "2720ee62-278e-4d32-924f-aa7401d1e7cb"). InnerVolumeSpecName "kube-api-access-vvkr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.368579 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2720ee62-278e-4d32-924f-aa7401d1e7cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2720ee62-278e-4d32-924f-aa7401d1e7cb" (UID: "2720ee62-278e-4d32-924f-aa7401d1e7cb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.415787 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2720ee62-278e-4d32-924f-aa7401d1e7cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.415823 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2720ee62-278e-4d32-924f-aa7401d1e7cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.415835 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvkr9\" (UniqueName: \"kubernetes.io/projected/2720ee62-278e-4d32-924f-aa7401d1e7cb-kube-api-access-vvkr9\") on node \"crc\" DevicePath \"\"" Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.571521 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz7n8" event={"ID":"b6b3a151-c2e9-4461-92c3-b7752926f08c","Type":"ContainerStarted","Data":"54d0edca032e18cf999c736223dd9c70d30a429057f87ed564c7bf6678056ea6"} Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.573410 4667 generic.go:334] "Generic (PLEG): container finished" podID="8ac842ab-5970-4e4c-81ee-2f8b40d61416" containerID="d6dab8aece37cea18f7e20c3cbcdd156c55c09eb4d0e1973ad5b4d56ec00cc05" exitCode=0 Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.573445 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52dpw" event={"ID":"8ac842ab-5970-4e4c-81ee-2f8b40d61416","Type":"ContainerDied","Data":"d6dab8aece37cea18f7e20c3cbcdd156c55c09eb4d0e1973ad5b4d56ec00cc05"} Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.575097 4667 generic.go:334] "Generic (PLEG): container finished" podID="7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6" containerID="9386d92fbf07db9f75520236fbfd77c24238c6123c6a8945dade4a4478cd2dd0" exitCode=0 Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.575180 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4lz7l" event={"ID":"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6","Type":"ContainerDied","Data":"9386d92fbf07db9f75520236fbfd77c24238c6123c6a8945dade4a4478cd2dd0"} Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.579346 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btrmv" event={"ID":"2720ee62-278e-4d32-924f-aa7401d1e7cb","Type":"ContainerDied","Data":"60126b8ef646d6f2e3cbe9f6a98adfc0005b4ee202d439378b492421511115b3"} Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.579416 4667 scope.go:117] "RemoveContainer" containerID="a9b73582f7941d912f8d6b31cfc6afd6daab615035e5c7cafd654529229119ac" Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.579650 4667 util.go:48] "No ready sandbox for pod can be found. 
Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.579650 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-btrmv" Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.604387 4667 scope.go:117] "RemoveContainer" containerID="704a22d4c0eb7714ab53f00a5c160af18b1cb7fc69774460a9e2ef365243dbfa" Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.634767 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fz7n8" podStartSLOduration=3.78839727 podStartE2EDuration="1m11.634748231s" podCreationTimestamp="2026-01-31 03:50:40 +0000 UTC" firstStartedPulling="2026-01-31 03:50:42.655643764 +0000 UTC m=+166.171979053" lastFinishedPulling="2026-01-31 03:51:50.501994705 +0000 UTC m=+234.018330014" observedRunningTime="2026-01-31 03:51:51.601572577 +0000 UTC m=+235.117907876" watchObservedRunningTime="2026-01-31 03:51:51.634748231 +0000 UTC m=+235.151083530" Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.650288 4667 scope.go:117] "RemoveContainer" containerID="cb84d7ab1c369dfe53a153134d9e0fcc84637b52dbd1c111bca29911b2b15d86" Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.693296 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-btrmv"] Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.700113 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-btrmv"] Jan 31 03:51:51 crc kubenswrapper[4667]: I0131 03:51:51.871750 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5pmpf" podUID="fd00c210-5665-4575-a5a3-413a89f5c03a" containerName="registry-server" probeResult="failure" output=< Jan 31 03:51:51 crc kubenswrapper[4667]: timeout: failed to connect service ":50051" within 1s Jan 31 03:51:51 crc kubenswrapper[4667]: > Jan 31 03:51:52 crc kubenswrapper[4667]: I0131 03:51:52.593011 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52dpw" event={"ID":"8ac842ab-5970-4e4c-81ee-2f8b40d61416","Type":"ContainerStarted","Data":"84453c0f56c0a60e7814ecf6f12cb59dbeb6114687bd50afdcf412564e448680"} Jan 31 03:51:52 crc kubenswrapper[4667]: I0131 03:51:52.595257 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4lz7l" event={"ID":"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6","Type":"ContainerStarted","Data":"650ff2bbf8345fc13923d8ca8414b911d79ba6f83c47772614c73467b9b8e9f8"} Jan 31 03:51:52 crc kubenswrapper[4667]: I0131 03:51:52.618600 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-52dpw" podStartSLOduration=5.261709208 podStartE2EDuration="1m15.618574096s" podCreationTimestamp="2026-01-31 03:50:37 +0000 UTC" firstStartedPulling="2026-01-31 03:50:41.632522893 +0000 UTC m=+165.148858202" lastFinishedPulling="2026-01-31 03:51:51.989387791 +0000 UTC m=+235.505723090" observedRunningTime="2026-01-31 03:51:52.617770273 +0000 UTC m=+236.134105572" watchObservedRunningTime="2026-01-31 03:51:52.618574096 +0000 UTC m=+236.134909395" Jan 31 03:51:53 crc kubenswrapper[4667]: I0131 03:51:53.288205 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2720ee62-278e-4d32-924f-aa7401d1e7cb" path="/var/lib/kubelet/pods/2720ee62-278e-4d32-924f-aa7401d1e7cb/volumes" Jan 31 03:51:57 crc kubenswrapper[4667]: I0131 03:51:57.230476 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4lz7l" 
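The Startup probe failure above, with its output of timeout: failed to connect service ":50051" within 1s, is a gRPC health check against the registry-server port: a marketplace catalog pod answers the gRPC health service on :50051 only once its catalog is being served, so the probe stays unhealthy during startup and flips to started shortly after. A minimal client-side equivalent, assuming google.golang.org/grpc and an insecure local endpoint:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
    	// Mirror the probe's 1s budget from the log output.
    	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    	defer cancel()

    	conn, err := grpc.Dial("127.0.0.1:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		fmt.Println("dial error:", err)
    		return
    	}
    	defer conn.Close()

    	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
    	if err != nil {
    		fmt.Println("probe failed:", err) // analogous to the timeout in the log
    		return
    	}
    	fmt.Println("status:", resp.GetStatus()) // SERVING once the registry is up
    }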
Jan 31 03:51:57 crc kubenswrapper[4667]: I0131 03:51:57.230909 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4lz7l" Jan 31 03:51:57 crc kubenswrapper[4667]: I0131 03:51:57.318576 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4lz7l" Jan 31 03:51:57 crc kubenswrapper[4667]: I0131 03:51:57.345376 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4lz7l" podStartSLOduration=7.800524538 podStartE2EDuration="1m21.345257533s" podCreationTimestamp="2026-01-31 03:50:36 +0000 UTC" firstStartedPulling="2026-01-31 03:50:38.50024074 +0000 UTC m=+162.016576039" lastFinishedPulling="2026-01-31 03:51:52.044973735 +0000 UTC m=+235.561309034" observedRunningTime="2026-01-31 03:51:52.637054036 +0000 UTC m=+236.153389335" watchObservedRunningTime="2026-01-31 03:51:57.345257533 +0000 UTC m=+240.861592832" Jan 31 03:51:57 crc kubenswrapper[4667]: I0131 03:51:57.410808 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lh6xn" Jan 31 03:51:57 crc kubenswrapper[4667]: I0131 03:51:57.411089 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lh6xn" Jan 31 03:51:57 crc kubenswrapper[4667]: I0131 03:51:57.459985 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lh6xn" Jan 31 03:51:57 crc kubenswrapper[4667]: I0131 03:51:57.668045 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lh6xn" Jan 31 03:51:57 crc kubenswrapper[4667]: I0131 03:51:57.668123 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4lz7l" Jan 31 03:51:57 crc kubenswrapper[4667]: I0131 03:51:57.819739 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-52dpw" Jan 31 03:51:57 crc kubenswrapper[4667]: I0131 03:51:57.819801 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-52dpw" Jan 31 03:51:57 crc kubenswrapper[4667]: I0131 03:51:57.871391 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-52dpw" Jan 31 03:51:58 crc kubenswrapper[4667]: I0131 03:51:58.685704 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-52dpw" Jan 31 03:51:59 crc kubenswrapper[4667]: I0131 03:51:59.800469 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-52dpw"] Jan 31 03:52:00 crc kubenswrapper[4667]: I0131 03:52:00.478474 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fz7n8" Jan 31 03:52:00 crc kubenswrapper[4667]: I0131 03:52:00.478535 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fz7n8" Jan 31 03:52:00 crc kubenswrapper[4667]: I0131 03:52:00.526666 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fz7n8" Jan 31 03:52:00 crc kubenswrapper[4667]: I0131 03:52:00.647463 4667 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/certified-operators-52dpw" podUID="8ac842ab-5970-4e4c-81ee-2f8b40d61416" containerName="registry-server" containerID="cri-o://84453c0f56c0a60e7814ecf6f12cb59dbeb6114687bd50afdcf412564e448680" gracePeriod=2 Jan 31 03:52:00 crc kubenswrapper[4667]: I0131 03:52:00.688465 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fz7n8" Jan 31 03:52:00 crc kubenswrapper[4667]: I0131 03:52:00.881323 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5pmpf" Jan 31 03:52:00 crc kubenswrapper[4667]: I0131 03:52:00.925936 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5pmpf" Jan 31 03:52:00 crc kubenswrapper[4667]: I0131 03:52:00.999547 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-52dpw" Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.076953 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdvrd\" (UniqueName: \"kubernetes.io/projected/8ac842ab-5970-4e4c-81ee-2f8b40d61416-kube-api-access-zdvrd\") pod \"8ac842ab-5970-4e4c-81ee-2f8b40d61416\" (UID: \"8ac842ab-5970-4e4c-81ee-2f8b40d61416\") " Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.077007 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac842ab-5970-4e4c-81ee-2f8b40d61416-utilities\") pod \"8ac842ab-5970-4e4c-81ee-2f8b40d61416\" (UID: \"8ac842ab-5970-4e4c-81ee-2f8b40d61416\") " Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.077087 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac842ab-5970-4e4c-81ee-2f8b40d61416-catalog-content\") pod \"8ac842ab-5970-4e4c-81ee-2f8b40d61416\" (UID: \"8ac842ab-5970-4e4c-81ee-2f8b40d61416\") " Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.078910 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ac842ab-5970-4e4c-81ee-2f8b40d61416-utilities" (OuterVolumeSpecName: "utilities") pod "8ac842ab-5970-4e4c-81ee-2f8b40d61416" (UID: "8ac842ab-5970-4e4c-81ee-2f8b40d61416"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.084104 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ac842ab-5970-4e4c-81ee-2f8b40d61416-kube-api-access-zdvrd" (OuterVolumeSpecName: "kube-api-access-zdvrd") pod "8ac842ab-5970-4e4c-81ee-2f8b40d61416" (UID: "8ac842ab-5970-4e4c-81ee-2f8b40d61416"). InnerVolumeSpecName "kube-api-access-zdvrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.128451 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ac842ab-5970-4e4c-81ee-2f8b40d61416-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ac842ab-5970-4e4c-81ee-2f8b40d61416" (UID: "8ac842ab-5970-4e4c-81ee-2f8b40d61416"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.178975 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ac842ab-5970-4e4c-81ee-2f8b40d61416-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.179011 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdvrd\" (UniqueName: \"kubernetes.io/projected/8ac842ab-5970-4e4c-81ee-2f8b40d61416-kube-api-access-zdvrd\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.179022 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ac842ab-5970-4e4c-81ee-2f8b40d61416-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.663876 4667 generic.go:334] "Generic (PLEG): container finished" podID="8ac842ab-5970-4e4c-81ee-2f8b40d61416" containerID="84453c0f56c0a60e7814ecf6f12cb59dbeb6114687bd50afdcf412564e448680" exitCode=0 Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.664971 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52dpw" event={"ID":"8ac842ab-5970-4e4c-81ee-2f8b40d61416","Type":"ContainerDied","Data":"84453c0f56c0a60e7814ecf6f12cb59dbeb6114687bd50afdcf412564e448680"} Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.665008 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-52dpw" event={"ID":"8ac842ab-5970-4e4c-81ee-2f8b40d61416","Type":"ContainerDied","Data":"22619f829f6de8217eace1f319c72679a525d463d87ca30e1117001eb4fa47e8"} Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.665028 4667 scope.go:117] "RemoveContainer" containerID="84453c0f56c0a60e7814ecf6f12cb59dbeb6114687bd50afdcf412564e448680" Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.665139 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-52dpw" Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.695247 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-52dpw"] Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.697005 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-52dpw"] Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.703395 4667 scope.go:117] "RemoveContainer" containerID="d6dab8aece37cea18f7e20c3cbcdd156c55c09eb4d0e1973ad5b4d56ec00cc05" Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.730427 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" podUID="21469e62-0345-41f0-a07b-eac67df38faf" containerName="oauth-openshift" containerID="cri-o://a5ca1121b9f8156bdbdb88c33f3f31395dcd7420156ff737b9cded03e3febfb9" gracePeriod=15 Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.732647 4667 scope.go:117] "RemoveContainer" containerID="835895e1b1da40ff7ed1ffa80d61bc766b4099e15542571008b3a1b336b30b94" Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.757803 4667 scope.go:117] "RemoveContainer" containerID="84453c0f56c0a60e7814ecf6f12cb59dbeb6114687bd50afdcf412564e448680" Jan 31 03:52:01 crc kubenswrapper[4667]: E0131 03:52:01.758450 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84453c0f56c0a60e7814ecf6f12cb59dbeb6114687bd50afdcf412564e448680\": container with ID starting with 84453c0f56c0a60e7814ecf6f12cb59dbeb6114687bd50afdcf412564e448680 not found: ID does not exist" containerID="84453c0f56c0a60e7814ecf6f12cb59dbeb6114687bd50afdcf412564e448680" Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.758536 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84453c0f56c0a60e7814ecf6f12cb59dbeb6114687bd50afdcf412564e448680"} err="failed to get container status \"84453c0f56c0a60e7814ecf6f12cb59dbeb6114687bd50afdcf412564e448680\": rpc error: code = NotFound desc = could not find container \"84453c0f56c0a60e7814ecf6f12cb59dbeb6114687bd50afdcf412564e448680\": container with ID starting with 84453c0f56c0a60e7814ecf6f12cb59dbeb6114687bd50afdcf412564e448680 not found: ID does not exist" Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.758571 4667 scope.go:117] "RemoveContainer" containerID="d6dab8aece37cea18f7e20c3cbcdd156c55c09eb4d0e1973ad5b4d56ec00cc05" Jan 31 03:52:01 crc kubenswrapper[4667]: E0131 03:52:01.759101 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6dab8aece37cea18f7e20c3cbcdd156c55c09eb4d0e1973ad5b4d56ec00cc05\": container with ID starting with d6dab8aece37cea18f7e20c3cbcdd156c55c09eb4d0e1973ad5b4d56ec00cc05 not found: ID does not exist" containerID="d6dab8aece37cea18f7e20c3cbcdd156c55c09eb4d0e1973ad5b4d56ec00cc05" Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.759182 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6dab8aece37cea18f7e20c3cbcdd156c55c09eb4d0e1973ad5b4d56ec00cc05"} err="failed to get container status \"d6dab8aece37cea18f7e20c3cbcdd156c55c09eb4d0e1973ad5b4d56ec00cc05\": rpc error: code = NotFound desc = could not find container \"d6dab8aece37cea18f7e20c3cbcdd156c55c09eb4d0e1973ad5b4d56ec00cc05\": container with ID starting with 
d6dab8aece37cea18f7e20c3cbcdd156c55c09eb4d0e1973ad5b4d56ec00cc05 not found: ID does not exist" Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.759199 4667 scope.go:117] "RemoveContainer" containerID="835895e1b1da40ff7ed1ffa80d61bc766b4099e15542571008b3a1b336b30b94" Jan 31 03:52:01 crc kubenswrapper[4667]: E0131 03:52:01.759758 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"835895e1b1da40ff7ed1ffa80d61bc766b4099e15542571008b3a1b336b30b94\": container with ID starting with 835895e1b1da40ff7ed1ffa80d61bc766b4099e15542571008b3a1b336b30b94 not found: ID does not exist" containerID="835895e1b1da40ff7ed1ffa80d61bc766b4099e15542571008b3a1b336b30b94" Jan 31 03:52:01 crc kubenswrapper[4667]: I0131 03:52:01.759788 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"835895e1b1da40ff7ed1ffa80d61bc766b4099e15542571008b3a1b336b30b94"} err="failed to get container status \"835895e1b1da40ff7ed1ffa80d61bc766b4099e15542571008b3a1b336b30b94\": rpc error: code = NotFound desc = could not find container \"835895e1b1da40ff7ed1ffa80d61bc766b4099e15542571008b3a1b336b30b94\": container with ID starting with 835895e1b1da40ff7ed1ffa80d61bc766b4099e15542571008b3a1b336b30b94 not found: ID does not exist" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.636096 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.679275 4667 generic.go:334] "Generic (PLEG): container finished" podID="21469e62-0345-41f0-a07b-eac67df38faf" containerID="a5ca1121b9f8156bdbdb88c33f3f31395dcd7420156ff737b9cded03e3febfb9" exitCode=0 Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.679320 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" event={"ID":"21469e62-0345-41f0-a07b-eac67df38faf","Type":"ContainerDied","Data":"a5ca1121b9f8156bdbdb88c33f3f31395dcd7420156ff737b9cded03e3febfb9"} Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.679349 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" event={"ID":"21469e62-0345-41f0-a07b-eac67df38faf","Type":"ContainerDied","Data":"f5fccc3b129ee1d13e7631cbec910946268d73dd54d2da22948f6411e6dcf949"} Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.679365 4667 scope.go:117] "RemoveContainer" containerID="a5ca1121b9f8156bdbdb88c33f3f31395dcd7420156ff737b9cded03e3febfb9"
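Every "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pair above and below is logged at error level but is harmless: the kubelet re-issues RemoveContainer for IDs that an earlier pass already deleted, and CRI-O answers NotFound. Cleanup stays idempotent by treating NotFound as success; a sketch of that pattern with a hypothetical remove callback (the status and codes packages are the real gRPC ones):

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeIfPresent treats NotFound as success so repeated cleanup passes are safe.
    func removeIfPresent(remove func(id string) error, id string) error {
    	err := remove(id)
    	if err == nil || status.Code(err) == codes.NotFound {
    		return nil // "ID does not exist" means the work is already done
    	}
    	return err
    }

    func main() {
    	alreadyGone := func(id string) error {
    		return status.Errorf(codes.NotFound, "could not find container %q", id)
    	}
    	fmt.Println(removeIfPresent(alreadyGone, "a5ca1121")) // <nil>
    }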
Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.679462 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dmtcm" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.701171 4667 scope.go:117] "RemoveContainer" containerID="a5ca1121b9f8156bdbdb88c33f3f31395dcd7420156ff737b9cded03e3febfb9" Jan 31 03:52:02 crc kubenswrapper[4667]: E0131 03:52:02.701686 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5ca1121b9f8156bdbdb88c33f3f31395dcd7420156ff737b9cded03e3febfb9\": container with ID starting with a5ca1121b9f8156bdbdb88c33f3f31395dcd7420156ff737b9cded03e3febfb9 not found: ID does not exist" containerID="a5ca1121b9f8156bdbdb88c33f3f31395dcd7420156ff737b9cded03e3febfb9" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.701743 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5ca1121b9f8156bdbdb88c33f3f31395dcd7420156ff737b9cded03e3febfb9"} err="failed to get container status \"a5ca1121b9f8156bdbdb88c33f3f31395dcd7420156ff737b9cded03e3febfb9\": rpc error: code = NotFound desc = could not find container \"a5ca1121b9f8156bdbdb88c33f3f31395dcd7420156ff737b9cded03e3febfb9\": container with ID starting with a5ca1121b9f8156bdbdb88c33f3f31395dcd7420156ff737b9cded03e3febfb9 not found: ID does not exist" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.796291 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5pmpf"] Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.796559 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5pmpf" podUID="fd00c210-5665-4575-a5a3-413a89f5c03a" containerName="registry-server" containerID="cri-o://a784507c4dc97618f1f424a9154b00db086d04778152f1fa6fcc795436273250" gracePeriod=2 Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.805294 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-cliconfig\") pod \"21469e62-0345-41f0-a07b-eac67df38faf\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.805616 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-template-provider-selection\") pod \"21469e62-0345-41f0-a07b-eac67df38faf\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.805741 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-serving-cert\") pod \"21469e62-0345-41f0-a07b-eac67df38faf\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.805886 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-template-login\") pod \"21469e62-0345-41f0-a07b-eac67df38faf\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.806039 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-service-ca\") pod \"21469e62-0345-41f0-a07b-eac67df38faf\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.806198 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlphk\" (UniqueName: \"kubernetes.io/projected/21469e62-0345-41f0-a07b-eac67df38faf-kube-api-access-tlphk\") pod \"21469e62-0345-41f0-a07b-eac67df38faf\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.806342 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-trusted-ca-bundle\") pod \"21469e62-0345-41f0-a07b-eac67df38faf\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.806521 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-session\") pod \"21469e62-0345-41f0-a07b-eac67df38faf\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.806719 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21469e62-0345-41f0-a07b-eac67df38faf-audit-dir\") pod \"21469e62-0345-41f0-a07b-eac67df38faf\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.806967 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-idp-0-file-data\") pod \"21469e62-0345-41f0-a07b-eac67df38faf\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.807175 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-template-error\") pod \"21469e62-0345-41f0-a07b-eac67df38faf\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.807368 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-ocp-branding-template\") pod \"21469e62-0345-41f0-a07b-eac67df38faf\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.807548 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-router-certs\") pod \"21469e62-0345-41f0-a07b-eac67df38faf\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.807711 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-audit-policies\") pod \"21469e62-0345-41f0-a07b-eac67df38faf\" (UID: \"21469e62-0345-41f0-a07b-eac67df38faf\") " Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.807090 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "21469e62-0345-41f0-a07b-eac67df38faf" (UID: "21469e62-0345-41f0-a07b-eac67df38faf"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.807131 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "21469e62-0345-41f0-a07b-eac67df38faf" (UID: "21469e62-0345-41f0-a07b-eac67df38faf"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.807176 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21469e62-0345-41f0-a07b-eac67df38faf-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "21469e62-0345-41f0-a07b-eac67df38faf" (UID: "21469e62-0345-41f0-a07b-eac67df38faf"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.807585 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "21469e62-0345-41f0-a07b-eac67df38faf" (UID: "21469e62-0345-41f0-a07b-eac67df38faf"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.809020 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "21469e62-0345-41f0-a07b-eac67df38faf" (UID: "21469e62-0345-41f0-a07b-eac67df38faf"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.811577 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "21469e62-0345-41f0-a07b-eac67df38faf" (UID: "21469e62-0345-41f0-a07b-eac67df38faf"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.811930 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "21469e62-0345-41f0-a07b-eac67df38faf" (UID: "21469e62-0345-41f0-a07b-eac67df38faf"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.812405 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "21469e62-0345-41f0-a07b-eac67df38faf" (UID: "21469e62-0345-41f0-a07b-eac67df38faf"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.814604 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "21469e62-0345-41f0-a07b-eac67df38faf" (UID: "21469e62-0345-41f0-a07b-eac67df38faf"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.815494 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21469e62-0345-41f0-a07b-eac67df38faf-kube-api-access-tlphk" (OuterVolumeSpecName: "kube-api-access-tlphk") pod "21469e62-0345-41f0-a07b-eac67df38faf" (UID: "21469e62-0345-41f0-a07b-eac67df38faf"). InnerVolumeSpecName "kube-api-access-tlphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.818615 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "21469e62-0345-41f0-a07b-eac67df38faf" (UID: "21469e62-0345-41f0-a07b-eac67df38faf"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.818674 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "21469e62-0345-41f0-a07b-eac67df38faf" (UID: "21469e62-0345-41f0-a07b-eac67df38faf"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.820749 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "21469e62-0345-41f0-a07b-eac67df38faf" (UID: "21469e62-0345-41f0-a07b-eac67df38faf"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.826425 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "21469e62-0345-41f0-a07b-eac67df38faf" (UID: "21469e62-0345-41f0-a07b-eac67df38faf"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.909423 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlphk\" (UniqueName: \"kubernetes.io/projected/21469e62-0345-41f0-a07b-eac67df38faf-kube-api-access-tlphk\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.909463 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.909475 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.909486 4667 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21469e62-0345-41f0-a07b-eac67df38faf-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.909496 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.909506 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.909517 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.909528 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.909539 4667 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.909548 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.909558 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.909568 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.909576 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:02 crc kubenswrapper[4667]: I0131 03:52:02.909585 4667 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21469e62-0345-41f0-a07b-eac67df38faf-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.023758 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dmtcm"] Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.030905 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dmtcm"] Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.129185 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5pmpf" Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.215973 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd00c210-5665-4575-a5a3-413a89f5c03a-catalog-content\") pod \"fd00c210-5665-4575-a5a3-413a89f5c03a\" (UID: \"fd00c210-5665-4575-a5a3-413a89f5c03a\") " Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.216591 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd00c210-5665-4575-a5a3-413a89f5c03a-utilities\") pod \"fd00c210-5665-4575-a5a3-413a89f5c03a\" (UID: \"fd00c210-5665-4575-a5a3-413a89f5c03a\") " Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.216678 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99562\" (UniqueName: \"kubernetes.io/projected/fd00c210-5665-4575-a5a3-413a89f5c03a-kube-api-access-99562\") pod \"fd00c210-5665-4575-a5a3-413a89f5c03a\" (UID: \"fd00c210-5665-4575-a5a3-413a89f5c03a\") " Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.217212 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd00c210-5665-4575-a5a3-413a89f5c03a-utilities" (OuterVolumeSpecName: "utilities") pod "fd00c210-5665-4575-a5a3-413a89f5c03a" (UID: "fd00c210-5665-4575-a5a3-413a89f5c03a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.220596 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd00c210-5665-4575-a5a3-413a89f5c03a-kube-api-access-99562" (OuterVolumeSpecName: "kube-api-access-99562") pod "fd00c210-5665-4575-a5a3-413a89f5c03a" (UID: "fd00c210-5665-4575-a5a3-413a89f5c03a"). InnerVolumeSpecName "kube-api-access-99562". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.290965 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21469e62-0345-41f0-a07b-eac67df38faf" path="/var/lib/kubelet/pods/21469e62-0345-41f0-a07b-eac67df38faf/volumes" Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.292061 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ac842ab-5970-4e4c-81ee-2f8b40d61416" path="/var/lib/kubelet/pods/8ac842ab-5970-4e4c-81ee-2f8b40d61416/volumes" Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.317992 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd00c210-5665-4575-a5a3-413a89f5c03a-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.318022 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99562\" (UniqueName: \"kubernetes.io/projected/fd00c210-5665-4575-a5a3-413a89f5c03a-kube-api-access-99562\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.370780 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd00c210-5665-4575-a5a3-413a89f5c03a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd00c210-5665-4575-a5a3-413a89f5c03a" (UID: "fd00c210-5665-4575-a5a3-413a89f5c03a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.420385 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd00c210-5665-4575-a5a3-413a89f5c03a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.690263 4667 generic.go:334] "Generic (PLEG): container finished" podID="fd00c210-5665-4575-a5a3-413a89f5c03a" containerID="a784507c4dc97618f1f424a9154b00db086d04778152f1fa6fcc795436273250" exitCode=0 Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.690320 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pmpf" event={"ID":"fd00c210-5665-4575-a5a3-413a89f5c03a","Type":"ContainerDied","Data":"a784507c4dc97618f1f424a9154b00db086d04778152f1fa6fcc795436273250"} Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.690354 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pmpf" event={"ID":"fd00c210-5665-4575-a5a3-413a89f5c03a","Type":"ContainerDied","Data":"2b2e9b443ad3fd9d3d4916a4f1395ac31bab9d9eb0b2bc974943fdcbdf07f55e"} Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.690377 4667 scope.go:117] "RemoveContainer" containerID="a784507c4dc97618f1f424a9154b00db086d04778152f1fa6fcc795436273250" Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.690371 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5pmpf" Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.706303 4667 scope.go:117] "RemoveContainer" containerID="0c4ffbc5a6a136c4802a1c7af9791ad9ab1b5c1d368e4af6c98a4cb0fc93a752" Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.726981 4667 scope.go:117] "RemoveContainer" containerID="c7da08564baa6bef2b68e9f2a7eb8db95e62f86c2bcad37cf89d40606e3162b9" Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.728663 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5pmpf"] Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.736092 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5pmpf"] Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.746709 4667 scope.go:117] "RemoveContainer" containerID="a784507c4dc97618f1f424a9154b00db086d04778152f1fa6fcc795436273250" Jan 31 03:52:03 crc kubenswrapper[4667]: E0131 03:52:03.749069 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a784507c4dc97618f1f424a9154b00db086d04778152f1fa6fcc795436273250\": container with ID starting with a784507c4dc97618f1f424a9154b00db086d04778152f1fa6fcc795436273250 not found: ID does not exist" containerID="a784507c4dc97618f1f424a9154b00db086d04778152f1fa6fcc795436273250" Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.749134 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a784507c4dc97618f1f424a9154b00db086d04778152f1fa6fcc795436273250"} err="failed to get container status \"a784507c4dc97618f1f424a9154b00db086d04778152f1fa6fcc795436273250\": rpc error: code = NotFound desc = could not find container \"a784507c4dc97618f1f424a9154b00db086d04778152f1fa6fcc795436273250\": container with ID starting with a784507c4dc97618f1f424a9154b00db086d04778152f1fa6fcc795436273250 not found: ID does not exist" Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.749174 4667 scope.go:117] "RemoveContainer" containerID="0c4ffbc5a6a136c4802a1c7af9791ad9ab1b5c1d368e4af6c98a4cb0fc93a752" Jan 31 03:52:03 crc kubenswrapper[4667]: E0131 03:52:03.749484 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c4ffbc5a6a136c4802a1c7af9791ad9ab1b5c1d368e4af6c98a4cb0fc93a752\": container with ID starting with 0c4ffbc5a6a136c4802a1c7af9791ad9ab1b5c1d368e4af6c98a4cb0fc93a752 not found: ID does not exist" containerID="0c4ffbc5a6a136c4802a1c7af9791ad9ab1b5c1d368e4af6c98a4cb0fc93a752" Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.749545 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c4ffbc5a6a136c4802a1c7af9791ad9ab1b5c1d368e4af6c98a4cb0fc93a752"} err="failed to get container status \"0c4ffbc5a6a136c4802a1c7af9791ad9ab1b5c1d368e4af6c98a4cb0fc93a752\": rpc error: code = NotFound desc = could not find container \"0c4ffbc5a6a136c4802a1c7af9791ad9ab1b5c1d368e4af6c98a4cb0fc93a752\": container with ID starting with 0c4ffbc5a6a136c4802a1c7af9791ad9ab1b5c1d368e4af6c98a4cb0fc93a752 not found: ID does not exist" Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.749571 4667 scope.go:117] "RemoveContainer" containerID="c7da08564baa6bef2b68e9f2a7eb8db95e62f86c2bcad37cf89d40606e3162b9" Jan 31 03:52:03 crc kubenswrapper[4667]: E0131 03:52:03.750009 4667 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"c7da08564baa6bef2b68e9f2a7eb8db95e62f86c2bcad37cf89d40606e3162b9\": container with ID starting with c7da08564baa6bef2b68e9f2a7eb8db95e62f86c2bcad37cf89d40606e3162b9 not found: ID does not exist" containerID="c7da08564baa6bef2b68e9f2a7eb8db95e62f86c2bcad37cf89d40606e3162b9" Jan 31 03:52:03 crc kubenswrapper[4667]: I0131 03:52:03.750051 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7da08564baa6bef2b68e9f2a7eb8db95e62f86c2bcad37cf89d40606e3162b9"} err="failed to get container status \"c7da08564baa6bef2b68e9f2a7eb8db95e62f86c2bcad37cf89d40606e3162b9\": rpc error: code = NotFound desc = could not find container \"c7da08564baa6bef2b68e9f2a7eb8db95e62f86c2bcad37cf89d40606e3162b9\": container with ID starting with c7da08564baa6bef2b68e9f2a7eb8db95e62f86c2bcad37cf89d40606e3162b9 not found: ID does not exist" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.570993 4667 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.571339 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd00c210-5665-4575-a5a3-413a89f5c03a" containerName="registry-server" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.571359 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd00c210-5665-4575-a5a3-413a89f5c03a" containerName="registry-server" Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.571375 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac842ab-5970-4e4c-81ee-2f8b40d61416" containerName="extract-content" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.571386 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac842ab-5970-4e4c-81ee-2f8b40d61416" containerName="extract-content" Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.571408 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2afc75c6-1f04-4acb-b958-4159c2764e5e" containerName="registry-server" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.571419 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="2afc75c6-1f04-4acb-b958-4159c2764e5e" containerName="registry-server" Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.571431 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2afc75c6-1f04-4acb-b958-4159c2764e5e" containerName="extract-content" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.571442 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="2afc75c6-1f04-4acb-b958-4159c2764e5e" containerName="extract-content" Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.571458 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd00c210-5665-4575-a5a3-413a89f5c03a" containerName="extract-content" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.571472 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd00c210-5665-4575-a5a3-413a89f5c03a" containerName="extract-content" Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.571488 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd00c210-5665-4575-a5a3-413a89f5c03a" containerName="extract-utilities" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.571498 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd00c210-5665-4575-a5a3-413a89f5c03a" containerName="extract-utilities" Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 
03:52:04.571512 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2720ee62-278e-4d32-924f-aa7401d1e7cb" containerName="extract-utilities" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.571522 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="2720ee62-278e-4d32-924f-aa7401d1e7cb" containerName="extract-utilities" Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.571532 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2afc75c6-1f04-4acb-b958-4159c2764e5e" containerName="extract-utilities" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.571547 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="2afc75c6-1f04-4acb-b958-4159c2764e5e" containerName="extract-utilities" Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.571562 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2720ee62-278e-4d32-924f-aa7401d1e7cb" containerName="extract-content" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.571573 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="2720ee62-278e-4d32-924f-aa7401d1e7cb" containerName="extract-content" Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.571592 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2720ee62-278e-4d32-924f-aa7401d1e7cb" containerName="registry-server" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.571602 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="2720ee62-278e-4d32-924f-aa7401d1e7cb" containerName="registry-server" Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.571617 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac842ab-5970-4e4c-81ee-2f8b40d61416" containerName="registry-server" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.571631 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac842ab-5970-4e4c-81ee-2f8b40d61416" containerName="registry-server" Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.571648 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac842ab-5970-4e4c-81ee-2f8b40d61416" containerName="extract-utilities" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.571658 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac842ab-5970-4e4c-81ee-2f8b40d61416" containerName="extract-utilities" Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.571676 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21469e62-0345-41f0-a07b-eac67df38faf" containerName="oauth-openshift" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.571687 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="21469e62-0345-41f0-a07b-eac67df38faf" containerName="oauth-openshift" Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.571705 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="537c03c8-f93a-41f7-a072-ce6a5485e72c" containerName="pruner" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.571716 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="537c03c8-f93a-41f7-a072-ce6a5485e72c" containerName="pruner" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.571917 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="537c03c8-f93a-41f7-a072-ce6a5485e72c" containerName="pruner" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.571942 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac842ab-5970-4e4c-81ee-2f8b40d61416" containerName="registry-server" Jan 31 03:52:04 crc 
kubenswrapper[4667]: I0131 03:52:04.571959 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd00c210-5665-4575-a5a3-413a89f5c03a" containerName="registry-server"
Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.571979 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="2720ee62-278e-4d32-924f-aa7401d1e7cb" containerName="registry-server"
Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.571996 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="21469e62-0345-41f0-a07b-eac67df38faf" containerName="oauth-openshift"
Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.572011 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="2afc75c6-1f04-4acb-b958-4159c2764e5e" containerName="registry-server"
Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.572043 4667 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file"
Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.572480 4667 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.572563 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.572870 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec" gracePeriod=15
Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.572929 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e" gracePeriod=15
Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.572952 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223" gracePeriod=15
Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.573024 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e" gracePeriod=15
Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.573102 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe" gracePeriod=15
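
At this point the static-pod file source has swapped the kube-apiserver manifest: SyncLoop REMOVE for kube-apiserver-crc is followed by "Killing container with a grace period" with gracePeriod=15 for each of its five containers. The usual shape of a graceful stop is: ask the container to exit, wait up to the grace period, then force-kill. A sketch under that assumption (not the actual kuberuntime_container.go code; stop and kill are hypothetical hooks standing in for the runtime's SIGTERM/SIGKILL):

    package main

    import (
    	"fmt"
    	"time"
    )

    // stopWithGrace asks a container to stop, waits up to the grace period
    // for it to exit, and force-kills it only if the deadline passes.
    func stopWithGrace(name string, grace time.Duration, stop func() <-chan struct{}, kill func()) {
    	fmt.Printf("killing container %q with a grace period of %s\n", name, grace)
    	select {
    	case <-stop():
    		fmt.Printf("%q exited within the grace period\n", name)
    	case <-time.After(grace):
    		kill()
    		fmt.Printf("%q force-killed after %s\n", name, grace)
    	}
    }

    func main() {
    	// Simulate a container that exits 100ms after being asked to stop,
    	// well inside a 15s grace period like the one in the log.
    	stop := func() <-chan struct{} {
    		done := make(chan struct{})
    		go func() { time.Sleep(100 * time.Millisecond); close(done) }()
    		return done
    	}
    	stopWithGrace("kube-apiserver", 15*time.Second, stop, func() {})
    }

In the entries that follow, the containers appear to exit before the deadline (no force-kill is logged), and the "container finished" events report their exit codes.
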
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.575128 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.575153 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.575165 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.575173 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.575183 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.575189 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.575200 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.575208 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.575221 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.575228 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.575246 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.575253 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.575403 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.575416 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.575431 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.575443 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.575453 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.575463 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.575571 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.575581 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.621646 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.707686 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.709328 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.710221 4667 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe" exitCode=0 Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.710247 4667 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e" exitCode=0 Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.710259 4667 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223" exitCode=0 Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.710269 4667 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e" exitCode=2 Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.710345 4667 scope.go:117] "RemoveContainer" containerID="8f9b77ac2a608254cc878b6c1fb67379deca6e3630cfad04dbfa7bb961a06051" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.738359 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.738443 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.738464 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.738609 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.738715 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.738772 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.738795 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.738867 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.839722 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.840049 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.840146 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.839881 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.840410 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.840484 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.840592 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.840658 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.840746 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.840820 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.840922 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.841021 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.840833 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.840998 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.840884 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.841092 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: I0131 03:52:04.911825 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:52:04 crc kubenswrapper[4667]: W0131 03:52:04.928924 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-5f38b417f658c4120e558f1b4705e31dee31b3c2a0af3c915f1b8ad216ecdaae WatchSource:0}: Error finding container 5f38b417f658c4120e558f1b4705e31dee31b3c2a0af3c915f1b8ad216ecdaae: Status 404 returned error can't find the container with id 5f38b417f658c4120e558f1b4705e31dee31b3c2a0af3c915f1b8ad216ecdaae Jan 31 03:52:04 crc kubenswrapper[4667]: E0131 03:52:04.934237 4667 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.111:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fb4651bdab650 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 03:52:04.9319092 +0000 UTC m=+248.448244519,LastTimestamp:2026-01-31 03:52:04.9319092 +0000 UTC m=+248.448244519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 03:52:05 crc kubenswrapper[4667]: E0131 03:52:05.271715 4667 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.111:6443: connect: 
connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fb4651bdab650 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 03:52:04.9319092 +0000 UTC m=+248.448244519,LastTimestamp:2026-01-31 03:52:04.9319092 +0000 UTC m=+248.448244519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 03:52:05 crc kubenswrapper[4667]: I0131 03:52:05.289834 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd00c210-5665-4575-a5a3-413a89f5c03a" path="/var/lib/kubelet/pods/fd00c210-5665-4575-a5a3-413a89f5c03a/volumes" Jan 31 03:52:05 crc kubenswrapper[4667]: E0131 03:52:05.374377 4667 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.111:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" volumeName="registry-storage" Jan 31 03:52:05 crc kubenswrapper[4667]: I0131 03:52:05.725414 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 03:52:05 crc kubenswrapper[4667]: I0131 03:52:05.727985 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2c6c729a2139023186db8d47849beefece1b15925f5567dae79c70255b22dea9"} Jan 31 03:52:05 crc kubenswrapper[4667]: I0131 03:52:05.728020 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5f38b417f658c4120e558f1b4705e31dee31b3c2a0af3c915f1b8ad216ecdaae"} Jan 31 03:52:05 crc kubenswrapper[4667]: I0131 03:52:05.729056 4667 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:05 crc kubenswrapper[4667]: I0131 03:52:05.730452 4667 generic.go:334] "Generic (PLEG): container finished" podID="b9115bbd-7154-4f77-8d1d-68b1e78a478a" containerID="84f2da8dbcfdcf5db87f14065eef1cf64cb6d2e34db17fe952d7085f69a5322e" exitCode=0 Jan 31 03:52:05 crc kubenswrapper[4667]: I0131 03:52:05.730483 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"b9115bbd-7154-4f77-8d1d-68b1e78a478a","Type":"ContainerDied","Data":"84f2da8dbcfdcf5db87f14065eef1cf64cb6d2e34db17fe952d7085f69a5322e"} Jan 31 03:52:05 crc kubenswrapper[4667]: I0131 03:52:05.730785 4667 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:05 crc kubenswrapper[4667]: I0131 03:52:05.731324 4667 status_manager.go:851] "Failed to get status for pod" podUID="b9115bbd-7154-4f77-8d1d-68b1e78a478a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:06 crc kubenswrapper[4667]: I0131 03:52:06.945604 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 03:52:06 crc kubenswrapper[4667]: I0131 03:52:06.946498 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:06 crc kubenswrapper[4667]: I0131 03:52:06.947805 4667 status_manager.go:851] "Failed to get status for pod" podUID="b9115bbd-7154-4f77-8d1d-68b1e78a478a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:06 crc kubenswrapper[4667]: I0131 03:52:06.948108 4667 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:06 crc kubenswrapper[4667]: I0131 03:52:06.948572 4667 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:06 crc kubenswrapper[4667]: I0131 03:52:06.983458 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:52:06 crc kubenswrapper[4667]: I0131 03:52:06.984042 4667 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:06 crc kubenswrapper[4667]: I0131 03:52:06.984375 4667 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:06 crc kubenswrapper[4667]: I0131 03:52:06.984706 4667 status_manager.go:851] "Failed to get status for pod" podUID="b9115bbd-7154-4f77-8d1d-68b1e78a478a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.096096 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9115bbd-7154-4f77-8d1d-68b1e78a478a-kube-api-access\") pod \"b9115bbd-7154-4f77-8d1d-68b1e78a478a\" (UID: \"b9115bbd-7154-4f77-8d1d-68b1e78a478a\") " Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.096162 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.096181 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b9115bbd-7154-4f77-8d1d-68b1e78a478a-var-lock\") pod \"b9115bbd-7154-4f77-8d1d-68b1e78a478a\" (UID: \"b9115bbd-7154-4f77-8d1d-68b1e78a478a\") " Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.096212 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.096225 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.096280 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9115bbd-7154-4f77-8d1d-68b1e78a478a-kubelet-dir\") pod \"b9115bbd-7154-4f77-8d1d-68b1e78a478a\" (UID: \"b9115bbd-7154-4f77-8d1d-68b1e78a478a\") " Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.096276 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9115bbd-7154-4f77-8d1d-68b1e78a478a-var-lock" (OuterVolumeSpecName: "var-lock") 
pod "b9115bbd-7154-4f77-8d1d-68b1e78a478a" (UID: "b9115bbd-7154-4f77-8d1d-68b1e78a478a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.096282 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.096294 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.096317 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9115bbd-7154-4f77-8d1d-68b1e78a478a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b9115bbd-7154-4f77-8d1d-68b1e78a478a" (UID: "b9115bbd-7154-4f77-8d1d-68b1e78a478a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.096333 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.097344 4667 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.097371 4667 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.097386 4667 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9115bbd-7154-4f77-8d1d-68b1e78a478a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.097398 4667 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.097409 4667 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b9115bbd-7154-4f77-8d1d-68b1e78a478a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.101045 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9115bbd-7154-4f77-8d1d-68b1e78a478a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b9115bbd-7154-4f77-8d1d-68b1e78a478a" (UID: "b9115bbd-7154-4f77-8d1d-68b1e78a478a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.199409 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9115bbd-7154-4f77-8d1d-68b1e78a478a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.283813 4667 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.284231 4667 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.284521 4667 status_manager.go:851] "Failed to get status for pod" podUID="b9115bbd-7154-4f77-8d1d-68b1e78a478a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.293456 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 31 03:52:07 crc kubenswrapper[4667]: E0131 03:52:07.515039 4667 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:07 crc kubenswrapper[4667]: E0131 03:52:07.516253 4667 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:07 crc kubenswrapper[4667]: E0131 03:52:07.516737 4667 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:07 crc kubenswrapper[4667]: E0131 03:52:07.517000 4667 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:07 crc kubenswrapper[4667]: E0131 03:52:07.517255 4667 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.517339 4667 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 31 03:52:07 crc kubenswrapper[4667]: E0131 03:52:07.517763 4667 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="200ms" Jan 31 03:52:07 crc kubenswrapper[4667]: E0131 03:52:07.718668 4667 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="400ms" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.749873 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.750703 4667 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec" exitCode=0 Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.750965 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.751210 4667 scope.go:117] "RemoveContainer" containerID="83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.751604 4667 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.751826 4667 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.752255 4667 status_manager.go:851] "Failed to get status for pod" podUID="b9115bbd-7154-4f77-8d1d-68b1e78a478a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.753641 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b9115bbd-7154-4f77-8d1d-68b1e78a478a","Type":"ContainerDied","Data":"d1d4914335748a23174fdc05a0dd600bc8dcdef4fdb0b0d895ae8a437efea966"} Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.753685 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1d4914335748a23174fdc05a0dd600bc8dcdef4fdb0b0d895ae8a437efea966" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.753746 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.756307 4667 status_manager.go:851] "Failed to get status for pod" podUID="b9115bbd-7154-4f77-8d1d-68b1e78a478a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.756984 4667 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.758089 4667 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.760433 4667 status_manager.go:851] "Failed to get status for pod" podUID="b9115bbd-7154-4f77-8d1d-68b1e78a478a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.761003 4667 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.761346 4667 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.770656 4667 scope.go:117] "RemoveContainer" containerID="69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.792896 4667 scope.go:117] "RemoveContainer" containerID="5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.809095 4667 scope.go:117] "RemoveContainer" containerID="6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.824977 4667 scope.go:117] "RemoveContainer" containerID="9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.839789 4667 scope.go:117] "RemoveContainer" containerID="3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.865107 4667 scope.go:117] "RemoveContainer" containerID="83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe" Jan 31 03:52:07 crc 
kubenswrapper[4667]: E0131 03:52:07.865630 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\": container with ID starting with 83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe not found: ID does not exist" containerID="83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.865715 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe"} err="failed to get container status \"83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\": rpc error: code = NotFound desc = could not find container \"83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe\": container with ID starting with 83d8dc7c4e37097d277fcddf5db7eef3d0b11612f212146eea9b7329e32ecbbe not found: ID does not exist" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.865810 4667 scope.go:117] "RemoveContainer" containerID="69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e" Jan 31 03:52:07 crc kubenswrapper[4667]: E0131 03:52:07.866312 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\": container with ID starting with 69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e not found: ID does not exist" containerID="69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.866347 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e"} err="failed to get container status \"69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\": rpc error: code = NotFound desc = could not find container \"69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e\": container with ID starting with 69db5031f36714acdf2c02293c0262a04a920ea9e96f734cff48469f5b44012e not found: ID does not exist" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.866372 4667 scope.go:117] "RemoveContainer" containerID="5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223" Jan 31 03:52:07 crc kubenswrapper[4667]: E0131 03:52:07.866777 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\": container with ID starting with 5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223 not found: ID does not exist" containerID="5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.866877 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223"} err="failed to get container status \"5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\": rpc error: code = NotFound desc = could not find container \"5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223\": container with ID starting with 5e78cc893c20531dcb586d5de334dcd2560a96da820ea5af2136681ae5647223 not found: ID does not exist" Jan 31 03:52:07 crc kubenswrapper[4667]: 
I0131 03:52:07.866951 4667 scope.go:117] "RemoveContainer" containerID="6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e" Jan 31 03:52:07 crc kubenswrapper[4667]: E0131 03:52:07.867523 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\": container with ID starting with 6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e not found: ID does not exist" containerID="6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.867592 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e"} err="failed to get container status \"6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\": rpc error: code = NotFound desc = could not find container \"6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e\": container with ID starting with 6349967c67e3afb7d22489fbf2522e7e0bf68235b15de89f50a43089661deb5e not found: ID does not exist" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.867611 4667 scope.go:117] "RemoveContainer" containerID="9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec" Jan 31 03:52:07 crc kubenswrapper[4667]: E0131 03:52:07.867899 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\": container with ID starting with 9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec not found: ID does not exist" containerID="9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.867929 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec"} err="failed to get container status \"9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\": rpc error: code = NotFound desc = could not find container \"9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec\": container with ID starting with 9f2094bc485e3483c89514e66858fcb4a8088f7662c061b65ba16a4ff45210ec not found: ID does not exist" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.867948 4667 scope.go:117] "RemoveContainer" containerID="3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c" Jan 31 03:52:07 crc kubenswrapper[4667]: E0131 03:52:07.868271 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\": container with ID starting with 3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c not found: ID does not exist" containerID="3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c" Jan 31 03:52:07 crc kubenswrapper[4667]: I0131 03:52:07.868300 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c"} err="failed to get container status \"3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\": rpc error: code = NotFound desc = could not find container \"3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c\": container 
with ID starting with 3ca3da6e3590939cbdfbaf35a08e05c623c4e5e35be9c45a9c77ef5f210dd89c not found: ID does not exist" Jan 31 03:52:08 crc kubenswrapper[4667]: E0131 03:52:08.121498 4667 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="800ms" Jan 31 03:52:08 crc kubenswrapper[4667]: E0131 03:52:08.922435 4667 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="1.6s" Jan 31 03:52:10 crc kubenswrapper[4667]: E0131 03:52:10.523224 4667 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="3.2s" Jan 31 03:52:13 crc kubenswrapper[4667]: E0131 03:52:13.724994 4667 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="6.4s" Jan 31 03:52:15 crc kubenswrapper[4667]: E0131 03:52:15.273080 4667 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.111:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fb4651bdab650 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 03:52:04.9319092 +0000 UTC m=+248.448244519,LastTimestamp:2026-01-31 03:52:04.9319092 +0000 UTC m=+248.448244519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 03:52:17 crc kubenswrapper[4667]: I0131 03:52:17.294413 4667 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:17 crc kubenswrapper[4667]: I0131 03:52:17.296459 4667 status_manager.go:851] "Failed to get status for pod" podUID="b9115bbd-7154-4f77-8d1d-68b1e78a478a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:17 crc kubenswrapper[4667]: E0131 03:52:17.531512 4667 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:52:17Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:52:17Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:52:17Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T03:52:17Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:17 crc kubenswrapper[4667]: E0131 03:52:17.532592 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:17 crc kubenswrapper[4667]: E0131 03:52:17.533351 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:17 crc kubenswrapper[4667]: E0131 03:52:17.534645 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:17 crc kubenswrapper[4667]: E0131 03:52:17.535910 4667 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:17 crc kubenswrapper[4667]: E0131 03:52:17.536109 4667 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 03:52:17 crc kubenswrapper[4667]: I0131 03:52:17.827893 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 03:52:17 crc kubenswrapper[4667]: I0131 03:52:17.827944 4667 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa" exitCode=1 Jan 31 03:52:17 crc kubenswrapper[4667]: I0131 03:52:17.827973 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa"} Jan 31 03:52:17 crc kubenswrapper[4667]: I0131 03:52:17.828437 4667 scope.go:117] "RemoveContainer" containerID="a93540db06524b42380aa14ebbb64ece6e98cf8104ccc5930d58ae980e41d3fa" Jan 31 03:52:17 
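(Annotation: the sequence above shows the kubelet's node heartbeat path while the apiserver is down: one failed PATCH of the status subresource, four failed re-GETs of the node, then "update node status exceeds retry count" after five attempts. A sketch of that retry shape, not the kubelet's exact code path — the patch body below is a simplified version of the conditions payload quoted in the log:)

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Heartbeat-style strategic merge patch over node conditions,
        // shaped like (but simpler than) the payload quoted above.
        patch := []byte(`{"status":{"conditions":[{"type":"Ready","lastHeartbeatTime":"` +
            time.Now().UTC().Format(time.RFC3339) + `"}]}}`)

        // Give up after a fixed number of attempts, matching the
        // "update node status exceeds retry count" entry above.
        for i := 0; i < 5; i++ {
            _, err = cs.CoreV1().Nodes().Patch(context.TODO(), "crc",
                types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status")
            if err == nil {
                fmt.Println("node status updated")
                return
            }
        }
        fmt.Println("update node status exceeds retry count:", err)
    }
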
crc kubenswrapper[4667]: I0131 03:52:17.829088 4667 status_manager.go:851] "Failed to get status for pod" podUID="b9115bbd-7154-4f77-8d1d-68b1e78a478a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:17 crc kubenswrapper[4667]: I0131 03:52:17.829743 4667 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:17 crc kubenswrapper[4667]: I0131 03:52:17.830222 4667 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:18 crc kubenswrapper[4667]: I0131 03:52:18.850177 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 03:52:18 crc kubenswrapper[4667]: I0131 03:52:18.850255 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"27fba4de6fcc75778489c89f6c6372569530d5373120558915203a2effe6217f"} Jan 31 03:52:18 crc kubenswrapper[4667]: I0131 03:52:18.853690 4667 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:18 crc kubenswrapper[4667]: I0131 03:52:18.856090 4667 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:18 crc kubenswrapper[4667]: I0131 03:52:18.856579 4667 status_manager.go:851] "Failed to get status for pod" podUID="b9115bbd-7154-4f77-8d1d-68b1e78a478a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:18 crc kubenswrapper[4667]: I0131 03:52:18.931621 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:52:19 crc kubenswrapper[4667]: I0131 03:52:19.281002 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:19 crc kubenswrapper[4667]: I0131 03:52:19.282911 4667 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:19 crc kubenswrapper[4667]: I0131 03:52:19.286227 4667 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:19 crc kubenswrapper[4667]: I0131 03:52:19.288105 4667 status_manager.go:851] "Failed to get status for pod" podUID="b9115bbd-7154-4f77-8d1d-68b1e78a478a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:19 crc kubenswrapper[4667]: I0131 03:52:19.296778 4667 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c10ccda3-d9b2-4d01-897a-8498aee530b2" Jan 31 03:52:19 crc kubenswrapper[4667]: I0131 03:52:19.297057 4667 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c10ccda3-d9b2-4d01-897a-8498aee530b2" Jan 31 03:52:19 crc kubenswrapper[4667]: E0131 03:52:19.297471 4667 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:19 crc kubenswrapper[4667]: I0131 03:52:19.298243 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:19 crc kubenswrapper[4667]: W0131 03:52:19.330168 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-e99775804a544b2112178ec69840a3eaffb9562b8e85562355fbaa494c5dda38 WatchSource:0}: Error finding container e99775804a544b2112178ec69840a3eaffb9562b8e85562355fbaa494c5dda38: Status 404 returned error can't find the container with id e99775804a544b2112178ec69840a3eaffb9562b8e85562355fbaa494c5dda38 Jan 31 03:52:19 crc kubenswrapper[4667]: I0131 03:52:19.860470 4667 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1cfa174647f66c552042292a5932fed92febf4eb5449b86e63d9cf155f1f74b8" exitCode=0 Jan 31 03:52:19 crc kubenswrapper[4667]: I0131 03:52:19.860600 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1cfa174647f66c552042292a5932fed92febf4eb5449b86e63d9cf155f1f74b8"} Jan 31 03:52:19 crc kubenswrapper[4667]: I0131 03:52:19.860666 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e99775804a544b2112178ec69840a3eaffb9562b8e85562355fbaa494c5dda38"} Jan 31 03:52:19 crc kubenswrapper[4667]: I0131 03:52:19.861186 4667 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c10ccda3-d9b2-4d01-897a-8498aee530b2" Jan 31 03:52:19 crc kubenswrapper[4667]: I0131 03:52:19.861219 4667 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c10ccda3-d9b2-4d01-897a-8498aee530b2" Jan 31 03:52:19 crc kubenswrapper[4667]: E0131 03:52:19.862180 4667 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:19 crc kubenswrapper[4667]: I0131 03:52:19.862182 4667 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:19 crc kubenswrapper[4667]: I0131 03:52:19.862784 4667 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:19 crc kubenswrapper[4667]: I0131 03:52:19.863235 4667 status_manager.go:851] "Failed to get status for pod" podUID="b9115bbd-7154-4f77-8d1d-68b1e78a478a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.111:6443: connect: connection refused" Jan 31 03:52:20 crc kubenswrapper[4667]: E0131 03:52:20.126401 4667 
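(Annotation: kube-apiserver-crc is a static pod, represented in the API by a kubelet-created mirror pod. The "Trying to delete pod" / "Deleting a mirror pod" entries above are the kubelet removing an outdated mirror pod so it can recreate it, and the deletes fail with connection refused because the apiserver being deleted through is itself restarting. A sketch of a UID-preconditioned delete of that kind — the namespace, pod name, and UID are taken from the log; this is an illustration, not the kubelet's mirror client:)

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // The UID precondition makes the delete a no-op if the API
        // object has already been replaced by a newer mirror pod.
        err = cs.CoreV1().Pods("openshift-kube-apiserver").Delete(
            context.TODO(), "kube-apiserver-crc",
            metav1.DeleteOptions{
                Preconditions: metav1.NewUIDPreconditions("c10ccda3-d9b2-4d01-897a-8498aee530b2"),
            })
        if err != nil {
            // e.g. "connect: connection refused", as in the entries above
            fmt.Println("mirror pod delete failed:", err)
        }
    }
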
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.111:6443: connect: connection refused" interval="7s" Jan 31 03:52:20 crc kubenswrapper[4667]: I0131 03:52:20.887272 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"89c3c03bf5fe00a96ebec7cb2e9bd46214676fc2048dc8ef5cae21e03e866bcb"} Jan 31 03:52:20 crc kubenswrapper[4667]: I0131 03:52:20.887317 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"267adf4f71130d1db331ac405159d202d8c5afddda8cf1cc60b3c06b7f84c05a"} Jan 31 03:52:20 crc kubenswrapper[4667]: I0131 03:52:20.887331 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"454110e7062da11f96a41befac0f39dd754686332fdce32131094b9165e951ef"} Jan 31 03:52:20 crc kubenswrapper[4667]: I0131 03:52:20.887342 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b382617a6513ad9977f3cf6d6c794e4afddc822e0491c3c796d2b51411d371ec"} Jan 31 03:52:21 crc kubenswrapper[4667]: I0131 03:52:21.895652 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bb7dac6fc94787c00a31fb5063e526f89ca06c1aeee8af755fdb096f96e13328"} Jan 31 03:52:21 crc kubenswrapper[4667]: I0131 03:52:21.897020 4667 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c10ccda3-d9b2-4d01-897a-8498aee530b2" Jan 31 03:52:21 crc kubenswrapper[4667]: I0131 03:52:21.897142 4667 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c10ccda3-d9b2-4d01-897a-8498aee530b2" Jan 31 03:52:21 crc kubenswrapper[4667]: I0131 03:52:21.897506 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:23 crc kubenswrapper[4667]: I0131 03:52:23.547203 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:52:23 crc kubenswrapper[4667]: I0131 03:52:23.547521 4667 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 31 03:52:23 crc kubenswrapper[4667]: I0131 03:52:23.547590 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 31 03:52:24 crc kubenswrapper[4667]: I0131 03:52:24.299007 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
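(Annotation: the "Failed to ensure lease exists, will retry" entries back off from 800ms through 1.6s, 3.2s, and 6.4s before settling at the 7s interval logged above. A doubling-with-cap sketch that reproduces that sequence; the starting interval and cap are read off the log, not taken from kubelet source:)

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const maxInterval = 7 * time.Second // cap observed in the log
        interval := 800 * time.Millisecond  // first retry interval observed

        // Prints 800ms, 1.6s, 3.2s, 6.4s, 7s, 7s — the intervals above.
        for i := 0; i < 6; i++ {
            fmt.Println(interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }
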
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:24 crc kubenswrapper[4667]: I0131 03:52:24.299685 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:24 crc kubenswrapper[4667]: I0131 03:52:24.306964 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:26 crc kubenswrapper[4667]: I0131 03:52:26.907967 4667 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:27 crc kubenswrapper[4667]: I0131 03:52:27.309110 4667 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d91a0168-8279-4d43-890e-ae76081e68e6" Jan 31 03:52:27 crc kubenswrapper[4667]: I0131 03:52:27.925361 4667 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c10ccda3-d9b2-4d01-897a-8498aee530b2" Jan 31 03:52:27 crc kubenswrapper[4667]: I0131 03:52:27.926343 4667 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c10ccda3-d9b2-4d01-897a-8498aee530b2" Jan 31 03:52:27 crc kubenswrapper[4667]: I0131 03:52:27.928170 4667 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d91a0168-8279-4d43-890e-ae76081e68e6" Jan 31 03:52:33 crc kubenswrapper[4667]: I0131 03:52:33.557338 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:52:33 crc kubenswrapper[4667]: I0131 03:52:33.565798 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 03:52:36 crc kubenswrapper[4667]: I0131 03:52:36.446060 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 03:52:36 crc kubenswrapper[4667]: I0131 03:52:36.843601 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 03:52:36 crc kubenswrapper[4667]: I0131 03:52:36.870646 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 03:52:37 crc kubenswrapper[4667]: I0131 03:52:37.070220 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 03:52:37 crc kubenswrapper[4667]: I0131 03:52:37.222023 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 03:52:38 crc kubenswrapper[4667]: I0131 03:52:38.107092 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 03:52:38 crc kubenswrapper[4667]: I0131 03:52:38.232897 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 03:52:38 crc kubenswrapper[4667]: I0131 03:52:38.482000 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 03:52:38 crc 
kubenswrapper[4667]: I0131 03:52:38.624173 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 03:52:38 crc kubenswrapper[4667]: I0131 03:52:38.664404 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 03:52:38 crc kubenswrapper[4667]: I0131 03:52:38.718058 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 03:52:39 crc kubenswrapper[4667]: I0131 03:52:39.064880 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 03:52:39 crc kubenswrapper[4667]: I0131 03:52:39.091119 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 03:52:39 crc kubenswrapper[4667]: I0131 03:52:39.187629 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 03:52:39 crc kubenswrapper[4667]: I0131 03:52:39.278595 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 03:52:39 crc kubenswrapper[4667]: I0131 03:52:39.401011 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 03:52:39 crc kubenswrapper[4667]: I0131 03:52:39.423220 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 03:52:39 crc kubenswrapper[4667]: I0131 03:52:39.448435 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 03:52:39 crc kubenswrapper[4667]: I0131 03:52:39.686414 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 03:52:39 crc kubenswrapper[4667]: I0131 03:52:39.714288 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 03:52:39 crc kubenswrapper[4667]: I0131 03:52:39.835962 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 03:52:39 crc kubenswrapper[4667]: I0131 03:52:39.902970 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 03:52:39 crc kubenswrapper[4667]: I0131 03:52:39.950775 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 03:52:40 crc kubenswrapper[4667]: I0131 03:52:40.354942 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 03:52:40 crc kubenswrapper[4667]: I0131 03:52:40.418746 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 03:52:40 crc kubenswrapper[4667]: I0131 03:52:40.480364 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 03:52:40 crc kubenswrapper[4667]: I0131 03:52:40.489995 4667 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 03:52:40 crc kubenswrapper[4667]: I0131 03:52:40.550784 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 03:52:40 crc kubenswrapper[4667]: I0131 03:52:40.557567 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 03:52:40 crc kubenswrapper[4667]: I0131 03:52:40.612447 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 03:52:40 crc kubenswrapper[4667]: I0131 03:52:40.662925 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 03:52:40 crc kubenswrapper[4667]: I0131 03:52:40.730917 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 03:52:40 crc kubenswrapper[4667]: I0131 03:52:40.747411 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 03:52:40 crc kubenswrapper[4667]: I0131 03:52:40.772389 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 03:52:40 crc kubenswrapper[4667]: I0131 03:52:40.794714 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 03:52:40 crc kubenswrapper[4667]: I0131 03:52:40.820058 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 03:52:40 crc kubenswrapper[4667]: I0131 03:52:40.869759 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 03:52:40 crc kubenswrapper[4667]: I0131 03:52:40.908171 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 03:52:40 crc kubenswrapper[4667]: I0131 03:52:40.948987 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 03:52:40 crc kubenswrapper[4667]: I0131 03:52:40.979746 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 03:52:41 crc kubenswrapper[4667]: I0131 03:52:41.000917 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 03:52:41 crc kubenswrapper[4667]: I0131 03:52:41.177348 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 03:52:41 crc kubenswrapper[4667]: I0131 03:52:41.191382 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 03:52:41 crc kubenswrapper[4667]: I0131 03:52:41.230735 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 03:52:41 crc kubenswrapper[4667]: I0131 03:52:41.260668 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 03:52:41 crc kubenswrapper[4667]: I0131 03:52:41.288306 4667 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 03:52:41 crc kubenswrapper[4667]: I0131 03:52:41.296953 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 03:52:41 crc kubenswrapper[4667]: I0131 03:52:41.313222 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 03:52:41 crc kubenswrapper[4667]: I0131 03:52:41.531239 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 03:52:41 crc kubenswrapper[4667]: I0131 03:52:41.541519 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 03:52:41 crc kubenswrapper[4667]: I0131 03:52:41.546722 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 03:52:41 crc kubenswrapper[4667]: I0131 03:52:41.547342 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 03:52:41 crc kubenswrapper[4667]: I0131 03:52:41.698160 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 03:52:41 crc kubenswrapper[4667]: I0131 03:52:41.709765 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 03:52:41 crc kubenswrapper[4667]: I0131 03:52:41.763043 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 03:52:41 crc kubenswrapper[4667]: I0131 03:52:41.856404 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 03:52:41 crc kubenswrapper[4667]: I0131 03:52:41.856600 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 03:52:41 crc kubenswrapper[4667]: I0131 03:52:41.861594 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 03:52:41 crc kubenswrapper[4667]: I0131 03:52:41.908401 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 03:52:41 crc kubenswrapper[4667]: I0131 03:52:41.959891 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 03:52:42 crc kubenswrapper[4667]: I0131 03:52:42.093306 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 03:52:42 crc kubenswrapper[4667]: I0131 03:52:42.161568 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 03:52:42 crc kubenswrapper[4667]: I0131 03:52:42.305827 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 03:52:42 crc kubenswrapper[4667]: I0131 03:52:42.328206 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 03:52:42 crc kubenswrapper[4667]: I0131 03:52:42.369048 4667 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 03:52:42 crc kubenswrapper[4667]: I0131 03:52:42.449515 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 03:52:42 crc kubenswrapper[4667]: I0131 03:52:42.487525 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 03:52:42 crc kubenswrapper[4667]: I0131 03:52:42.499454 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 03:52:42 crc kubenswrapper[4667]: I0131 03:52:42.560967 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 03:52:42 crc kubenswrapper[4667]: I0131 03:52:42.575456 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 03:52:42 crc kubenswrapper[4667]: I0131 03:52:42.628514 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 03:52:42 crc kubenswrapper[4667]: I0131 03:52:42.634109 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 03:52:42 crc kubenswrapper[4667]: I0131 03:52:42.701981 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 03:52:42 crc kubenswrapper[4667]: I0131 03:52:42.702013 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 03:52:42 crc kubenswrapper[4667]: I0131 03:52:42.703373 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 03:52:42 crc kubenswrapper[4667]: I0131 03:52:42.798620 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 03:52:42 crc kubenswrapper[4667]: I0131 03:52:42.812998 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 03:52:42 crc kubenswrapper[4667]: I0131 03:52:42.816630 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.043124 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.058114 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.119367 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.127861 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.241235 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 
03:52:43.259986 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.367262 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.375069 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.463635 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.550477 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.581942 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.641417 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.651767 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.655721 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.693826 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.716889 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.735416 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.745442 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.775124 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.879192 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.938132 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 03:52:43 crc kubenswrapper[4667]: I0131 03:52:43.977421 4667 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.036352 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.107809 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 
03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.110158 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.111176 4667 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.114721 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=40.114707931 podStartE2EDuration="40.114707931s" podCreationTimestamp="2026-01-31 03:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:52:26.944184005 +0000 UTC m=+270.460519344" watchObservedRunningTime="2026-01-31 03:52:44.114707931 +0000 UTC m=+287.631043230" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.115252 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.115297 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7b485cc687-2l6jx","openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 03:52:44 crc kubenswrapper[4667]: E0131 03:52:44.115474 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9115bbd-7154-4f77-8d1d-68b1e78a478a" containerName="installer" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.115491 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9115bbd-7154-4f77-8d1d-68b1e78a478a" containerName="installer" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.115585 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9115bbd-7154-4f77-8d1d-68b1e78a478a" containerName="installer" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.116092 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.116802 4667 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c10ccda3-d9b2-4d01-897a-8498aee530b2" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.116919 4667 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c10ccda3-d9b2-4d01-897a-8498aee530b2" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.122857 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.123135 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.123288 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.123536 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.123582 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.123305 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.123897 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.124147 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.124312 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.123549 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.124325 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.124557 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.124914 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.129199 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.131183 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.131756 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 03:52:44 crc 
kubenswrapper[4667]: I0131 03:52:44.133991 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.135465 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.142497 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.163222 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.163206909 podStartE2EDuration="18.163206909s" podCreationTimestamp="2026-01-31 03:52:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:52:44.160212829 +0000 UTC m=+287.676548128" watchObservedRunningTime="2026-01-31 03:52:44.163206909 +0000 UTC m=+287.679542208" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.209576 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.217770 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.217831 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.217899 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-user-template-login\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.217932 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5892b5c0-3b90-4625-9d68-cadd28308327-audit-policies\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.217975 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " 
pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.218054 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-session\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.218089 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5892b5c0-3b90-4625-9d68-cadd28308327-audit-dir\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.218134 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.218157 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.218181 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-user-template-error\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.218210 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.218240 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69mcj\" (UniqueName: \"kubernetes.io/projected/5892b5c0-3b90-4625-9d68-cadd28308327-kube-api-access-69mcj\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.218276 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.218304 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.241538 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.318759 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-session\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.318796 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5892b5c0-3b90-4625-9d68-cadd28308327-audit-dir\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.318858 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.318882 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.318900 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-user-template-error\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.318925 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.318951 4667 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-69mcj\" (UniqueName: \"kubernetes.io/projected/5892b5c0-3b90-4625-9d68-cadd28308327-kube-api-access-69mcj\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.318971 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.318996 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.319026 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5892b5c0-3b90-4625-9d68-cadd28308327-audit-dir\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.319040 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.319080 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.319123 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5892b5c0-3b90-4625-9d68-cadd28308327-audit-policies\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.319144 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-user-template-login\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.319204 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.320388 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.321331 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5892b5c0-3b90-4625-9d68-cadd28308327-audit-policies\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.321975 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.324257 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-session\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.324277 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.324814 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.324823 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-user-template-error\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.326112 4667 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.326165 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.326387 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-user-template-login\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.326660 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.328118 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.330642 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5892b5c0-3b90-4625-9d68-cadd28308327-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.339632 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69mcj\" (UniqueName: \"kubernetes.io/projected/5892b5c0-3b90-4625-9d68-cadd28308327-kube-api-access-69mcj\") pod \"oauth-openshift-7b485cc687-2l6jx\" (UID: \"5892b5c0-3b90-4625-9d68-cadd28308327\") " pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.438913 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.461610 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.573917 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.608847 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.646986 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b485cc687-2l6jx"] Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.794493 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.814713 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 03:52:44 crc kubenswrapper[4667]: I0131 03:52:44.970489 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.025967 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" event={"ID":"5892b5c0-3b90-4625-9d68-cadd28308327","Type":"ContainerStarted","Data":"75fbe0c72f5a56ba1ce7227d648a035ecba03726285ba33de9c83873e629dfcf"} Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.026061 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" event={"ID":"5892b5c0-3b90-4625-9d68-cadd28308327","Type":"ContainerStarted","Data":"aa48b81a442c384d934959fa26ffa102dd2d13db989f997d09e2858379bc6977"} Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.026447 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.028603 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.049196 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" podStartSLOduration=69.049172672 podStartE2EDuration="1m9.049172672s" podCreationTimestamp="2026-01-31 03:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:52:45.044510928 +0000 UTC m=+288.560846227" watchObservedRunningTime="2026-01-31 03:52:45.049172672 +0000 UTC m=+288.565507971" Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.154732 4667 patch_prober.go:28] interesting pod/oauth-openshift-7b485cc687-2l6jx container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": read tcp 10.217.0.2:53074->10.217.0.56:6443: read: connection reset by peer" start-of-body= Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.154781 4667 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" podUID="5892b5c0-3b90-4625-9d68-cadd28308327" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": read tcp 10.217.0.2:53074->10.217.0.56:6443: read: connection reset by peer" Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.162257 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.196819 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.223444 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.307380 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.399692 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.402380 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.446371 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.454898 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.555459 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.569945 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.575645 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.682742 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.682960 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.729936 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.757386 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 03:52:45 crc kubenswrapper[4667]: I0131 03:52:45.840490 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 03:52:46 crc kubenswrapper[4667]: I0131 03:52:46.015964 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 03:52:46 crc kubenswrapper[4667]: I0131 03:52:46.033583 4667 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7b485cc687-2l6jx_5892b5c0-3b90-4625-9d68-cadd28308327/oauth-openshift/0.log" Jan 31 03:52:46 crc kubenswrapper[4667]: I0131 03:52:46.033633 4667 generic.go:334] "Generic (PLEG): container finished" podID="5892b5c0-3b90-4625-9d68-cadd28308327" containerID="75fbe0c72f5a56ba1ce7227d648a035ecba03726285ba33de9c83873e629dfcf" exitCode=255 Jan 31 03:52:46 crc kubenswrapper[4667]: I0131 03:52:46.033698 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" event={"ID":"5892b5c0-3b90-4625-9d68-cadd28308327","Type":"ContainerDied","Data":"75fbe0c72f5a56ba1ce7227d648a035ecba03726285ba33de9c83873e629dfcf"} Jan 31 03:52:46 crc kubenswrapper[4667]: I0131 03:52:46.034245 4667 scope.go:117] "RemoveContainer" containerID="75fbe0c72f5a56ba1ce7227d648a035ecba03726285ba33de9c83873e629dfcf" Jan 31 03:52:46 crc kubenswrapper[4667]: I0131 03:52:46.092057 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 03:52:46 crc kubenswrapper[4667]: I0131 03:52:46.200011 4667 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 03:52:46 crc kubenswrapper[4667]: I0131 03:52:46.255489 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 03:52:46 crc kubenswrapper[4667]: I0131 03:52:46.406432 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 03:52:46 crc kubenswrapper[4667]: I0131 03:52:46.432446 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 03:52:46 crc kubenswrapper[4667]: I0131 03:52:46.483863 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 03:52:46 crc kubenswrapper[4667]: I0131 03:52:46.540873 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 03:52:46 crc kubenswrapper[4667]: I0131 03:52:46.659183 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 03:52:46 crc kubenswrapper[4667]: I0131 03:52:46.692463 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 03:52:46 crc kubenswrapper[4667]: I0131 03:52:46.700416 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 03:52:46 crc kubenswrapper[4667]: I0131 03:52:46.723171 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 03:52:46 crc kubenswrapper[4667]: I0131 03:52:46.731725 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 03:52:46 crc kubenswrapper[4667]: I0131 03:52:46.857124 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 03:52:46 crc kubenswrapper[4667]: I0131 03:52:46.903218 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 
03:52:46 crc kubenswrapper[4667]: I0131 03:52:46.952201 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.018093 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.025586 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.041317 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7b485cc687-2l6jx_5892b5c0-3b90-4625-9d68-cadd28308327/oauth-openshift/1.log" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.042173 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7b485cc687-2l6jx_5892b5c0-3b90-4625-9d68-cadd28308327/oauth-openshift/0.log" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.042237 4667 generic.go:334] "Generic (PLEG): container finished" podID="5892b5c0-3b90-4625-9d68-cadd28308327" containerID="f5da4f4d8da26aab93b0c89dfe86657e1e1cae5a927d84028af50b3dfbb86474" exitCode=255 Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.042276 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" event={"ID":"5892b5c0-3b90-4625-9d68-cadd28308327","Type":"ContainerDied","Data":"f5da4f4d8da26aab93b0c89dfe86657e1e1cae5a927d84028af50b3dfbb86474"} Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.042339 4667 scope.go:117] "RemoveContainer" containerID="75fbe0c72f5a56ba1ce7227d648a035ecba03726285ba33de9c83873e629dfcf" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.043467 4667 scope.go:117] "RemoveContainer" containerID="f5da4f4d8da26aab93b0c89dfe86657e1e1cae5a927d84028af50b3dfbb86474" Jan 31 03:52:47 crc kubenswrapper[4667]: E0131 03:52:47.043865 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-7b485cc687-2l6jx_openshift-authentication(5892b5c0-3b90-4625-9d68-cadd28308327)\"" pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" podUID="5892b5c0-3b90-4625-9d68-cadd28308327" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.066525 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.070123 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.092655 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.092751 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.092819 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.150947 4667 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.151015 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.153037 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.243273 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.304684 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.330407 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.344419 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.349727 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.360354 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.378660 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.408891 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.486678 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.595272 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.635179 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.671668 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.680672 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.801698 4667 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.828933 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.867486 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 03:52:47 crc kubenswrapper[4667]: I0131 03:52:47.924227 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.013063 
4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.029409 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.049126 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7b485cc687-2l6jx_5892b5c0-3b90-4625-9d68-cadd28308327/oauth-openshift/1.log" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.049613 4667 scope.go:117] "RemoveContainer" containerID="f5da4f4d8da26aab93b0c89dfe86657e1e1cae5a927d84028af50b3dfbb86474" Jan 31 03:52:48 crc kubenswrapper[4667]: E0131 03:52:48.049934 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-7b485cc687-2l6jx_openshift-authentication(5892b5c0-3b90-4625-9d68-cadd28308327)\"" pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" podUID="5892b5c0-3b90-4625-9d68-cadd28308327" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.080651 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.087577 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.107166 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.282158 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.309937 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.310348 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.397021 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.433196 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.453965 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.487589 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.492642 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.600383 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.605767 
4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.637440 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.759798 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.780083 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.959006 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.972940 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 03:52:48 crc kubenswrapper[4667]: I0131 03:52:48.974085 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 03:52:49 crc kubenswrapper[4667]: I0131 03:52:49.122403 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 03:52:49 crc kubenswrapper[4667]: I0131 03:52:49.126115 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 03:52:49 crc kubenswrapper[4667]: I0131 03:52:49.196116 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 03:52:49 crc kubenswrapper[4667]: I0131 03:52:49.231047 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 03:52:49 crc kubenswrapper[4667]: I0131 03:52:49.351768 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 03:52:49 crc kubenswrapper[4667]: I0131 03:52:49.401704 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 03:52:49 crc kubenswrapper[4667]: I0131 03:52:49.461162 4667 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 03:52:49 crc kubenswrapper[4667]: I0131 03:52:49.461574 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://2c6c729a2139023186db8d47849beefece1b15925f5567dae79c70255b22dea9" gracePeriod=5 Jan 31 03:52:49 crc kubenswrapper[4667]: I0131 03:52:49.479626 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 03:52:49 crc kubenswrapper[4667]: I0131 03:52:49.482200 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 03:52:49 crc kubenswrapper[4667]: I0131 03:52:49.547418 4667 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 03:52:49 crc 
kubenswrapper[4667]: I0131 03:52:49.716329 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 03:52:49 crc kubenswrapper[4667]: I0131 03:52:49.730629 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 03:52:49 crc kubenswrapper[4667]: I0131 03:52:49.764927 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 03:52:49 crc kubenswrapper[4667]: I0131 03:52:49.836393 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 03:52:49 crc kubenswrapper[4667]: I0131 03:52:49.867064 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 03:52:49 crc kubenswrapper[4667]: I0131 03:52:49.872004 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 03:52:49 crc kubenswrapper[4667]: I0131 03:52:49.942096 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 03:52:49 crc kubenswrapper[4667]: I0131 03:52:49.969790 4667 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 03:52:50 crc kubenswrapper[4667]: I0131 03:52:50.114135 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 03:52:50 crc kubenswrapper[4667]: I0131 03:52:50.247225 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 03:52:50 crc kubenswrapper[4667]: I0131 03:52:50.272108 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 03:52:50 crc kubenswrapper[4667]: I0131 03:52:50.328899 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 03:52:50 crc kubenswrapper[4667]: I0131 03:52:50.462215 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 03:52:50 crc kubenswrapper[4667]: I0131 03:52:50.500131 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 03:52:50 crc kubenswrapper[4667]: I0131 03:52:50.671132 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 03:52:50 crc kubenswrapper[4667]: I0131 03:52:50.787822 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 03:52:50 crc kubenswrapper[4667]: I0131 03:52:50.890152 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 03:52:51 crc kubenswrapper[4667]: I0131 03:52:51.125400 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 03:52:51 crc kubenswrapper[4667]: I0131 03:52:51.175101 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 03:52:51 crc kubenswrapper[4667]: I0131 03:52:51.308643 4667 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 03:52:51 crc kubenswrapper[4667]: I0131 03:52:51.560265 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 03:52:51 crc kubenswrapper[4667]: I0131 03:52:51.575148 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 03:52:51 crc kubenswrapper[4667]: I0131 03:52:51.804016 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 03:52:51 crc kubenswrapper[4667]: I0131 03:52:51.857107 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 03:52:52 crc kubenswrapper[4667]: I0131 03:52:52.074944 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 03:52:52 crc kubenswrapper[4667]: I0131 03:52:52.588921 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 03:52:52 crc kubenswrapper[4667]: I0131 03:52:52.702431 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 03:52:53 crc kubenswrapper[4667]: I0131 03:52:53.031399 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 03:52:54 crc kubenswrapper[4667]: I0131 03:52:54.439062 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:54 crc kubenswrapper[4667]: I0131 03:52:54.440013 4667 scope.go:117] "RemoveContainer" containerID="f5da4f4d8da26aab93b0c89dfe86657e1e1cae5a927d84028af50b3dfbb86474" Jan 31 03:52:54 crc kubenswrapper[4667]: E0131 03:52:54.440218 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-7b485cc687-2l6jx_openshift-authentication(5892b5c0-3b90-4625-9d68-cadd28308327)\"" pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" podUID="5892b5c0-3b90-4625-9d68-cadd28308327" Jan 31 03:52:54 crc kubenswrapper[4667]: I0131 03:52:54.440607 4667 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.055436 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.055508 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.082815 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.082884 4667 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="2c6c729a2139023186db8d47849beefece1b15925f5567dae79c70255b22dea9" exitCode=137 Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.083001 4667 scope.go:117] "RemoveContainer" containerID="2c6c729a2139023186db8d47849beefece1b15925f5567dae79c70255b22dea9" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.083057 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.083364 4667 scope.go:117] "RemoveContainer" containerID="f5da4f4d8da26aab93b0c89dfe86657e1e1cae5a927d84028af50b3dfbb86474" Jan 31 03:52:55 crc kubenswrapper[4667]: E0131 03:52:55.083555 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-7b485cc687-2l6jx_openshift-authentication(5892b5c0-3b90-4625-9d68-cadd28308327)\"" pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" podUID="5892b5c0-3b90-4625-9d68-cadd28308327" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.105449 4667 scope.go:117] "RemoveContainer" containerID="2c6c729a2139023186db8d47849beefece1b15925f5567dae79c70255b22dea9" Jan 31 03:52:55 crc kubenswrapper[4667]: E0131 03:52:55.105966 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c6c729a2139023186db8d47849beefece1b15925f5567dae79c70255b22dea9\": container with ID starting with 2c6c729a2139023186db8d47849beefece1b15925f5567dae79c70255b22dea9 not found: ID does not exist" containerID="2c6c729a2139023186db8d47849beefece1b15925f5567dae79c70255b22dea9" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.106016 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c6c729a2139023186db8d47849beefece1b15925f5567dae79c70255b22dea9"} err="failed to get container status \"2c6c729a2139023186db8d47849beefece1b15925f5567dae79c70255b22dea9\": rpc error: code = NotFound desc = could not find container \"2c6c729a2139023186db8d47849beefece1b15925f5567dae79c70255b22dea9\": container with ID starting with 2c6c729a2139023186db8d47849beefece1b15925f5567dae79c70255b22dea9 not found: ID does not exist" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.159561 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.159623 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 03:52:55 
crc kubenswrapper[4667]: I0131 03:52:55.159647 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.159666 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.159691 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.159717 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.159775 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.159815 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.159989 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.161190 4667 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.161229 4667 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.161246 4667 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.161262 4667 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.175393 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.262685 4667 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.303830 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.304489 4667 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.314333 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.314383 4667 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6bd3acaf-65fa-46c0-b6da-99645a5bdf87" Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.318420 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 03:52:55 crc kubenswrapper[4667]: I0131 03:52:55.318465 4667 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6bd3acaf-65fa-46c0-b6da-99645a5bdf87" Jan 31 03:52:57 crc kubenswrapper[4667]: I0131 03:52:57.055044 4667 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 31 03:53:06 crc kubenswrapper[4667]: I0131 03:53:06.170111 4667 generic.go:334] "Generic (PLEG): container finished" podID="d426c096-b6d9-4696-8066-2b9ec75356af" containerID="e94d4e8ba1eb248eeb1951f7ff5f115f37d3247fe9c66b1b88da95e50014b0ec" exitCode=0 Jan 31 03:53:06 crc kubenswrapper[4667]: I0131 03:53:06.170204 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" event={"ID":"d426c096-b6d9-4696-8066-2b9ec75356af","Type":"ContainerDied","Data":"e94d4e8ba1eb248eeb1951f7ff5f115f37d3247fe9c66b1b88da95e50014b0ec"} Jan 31 03:53:06 crc kubenswrapper[4667]: I0131 03:53:06.172309 4667 scope.go:117] "RemoveContainer" containerID="e94d4e8ba1eb248eeb1951f7ff5f115f37d3247fe9c66b1b88da95e50014b0ec" Jan 31 03:53:07 crc kubenswrapper[4667]: I0131 03:53:07.181172 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" event={"ID":"d426c096-b6d9-4696-8066-2b9ec75356af","Type":"ContainerStarted","Data":"19612eb55c241bb25ef7596fc023d970256a4902beb8863cd5ef2446af5663e4"} Jan 31 03:53:07 crc kubenswrapper[4667]: I0131 03:53:07.182882 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" Jan 31 03:53:07 crc kubenswrapper[4667]: I0131 03:53:07.190499 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" Jan 31 03:53:08 crc kubenswrapper[4667]: I0131 03:53:08.034661 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 03:53:08 crc kubenswrapper[4667]: I0131 
03:53:08.282204 4667 scope.go:117] "RemoveContainer" containerID="f5da4f4d8da26aab93b0c89dfe86657e1e1cae5a927d84028af50b3dfbb86474" Jan 31 03:53:09 crc kubenswrapper[4667]: I0131 03:53:09.201712 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7b485cc687-2l6jx_5892b5c0-3b90-4625-9d68-cadd28308327/oauth-openshift/1.log" Jan 31 03:53:09 crc kubenswrapper[4667]: I0131 03:53:09.202371 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" event={"ID":"5892b5c0-3b90-4625-9d68-cadd28308327","Type":"ContainerStarted","Data":"af30724685c27a26d458ac9e6427c4f37f3ac87c04a5a14adc3e3a27b7f8af01"} Jan 31 03:53:09 crc kubenswrapper[4667]: I0131 03:53:09.203095 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:53:09 crc kubenswrapper[4667]: I0131 03:53:09.212418 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7b485cc687-2l6jx" Jan 31 03:53:09 crc kubenswrapper[4667]: I0131 03:53:09.457124 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 03:53:24 crc kubenswrapper[4667]: I0131 03:53:24.249167 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 03:53:27 crc kubenswrapper[4667]: I0131 03:53:27.674392 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 03:53:56 crc kubenswrapper[4667]: I0131 03:53:56.744191 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gf8vs"] Jan 31 03:53:56 crc kubenswrapper[4667]: I0131 03:53:56.745165 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" podUID="baefc4bd-d927-4cf9-94af-eab8b042b3ca" containerName="controller-manager" containerID="cri-o://e64ef9d54d89ea09eae175e003404c5cdde2641ca34119c86752b41b57857afb" gracePeriod=30 Jan 31 03:53:56 crc kubenswrapper[4667]: I0131 03:53:56.832897 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2"] Jan 31 03:53:56 crc kubenswrapper[4667]: I0131 03:53:56.833578 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" podUID="74c3828b-92ba-4a4a-bfeb-d5d02facdbdb" containerName="route-controller-manager" containerID="cri-o://4936970391d4c9923966a78873ffb18137cf2d816fb34c1589649063d522e4e5" gracePeriod=30 Jan 31 03:53:56 crc kubenswrapper[4667]: I0131 03:53:56.940890 4667 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-m5nl2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 31 03:53:56 crc kubenswrapper[4667]: I0131 03:53:56.940988 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" podUID="74c3828b-92ba-4a4a-bfeb-d5d02facdbdb" 
containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.090316 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.190302 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.221365 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baefc4bd-d927-4cf9-94af-eab8b042b3ca-serving-cert\") pod \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\" (UID: \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\") " Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.221520 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrqkz\" (UniqueName: \"kubernetes.io/projected/baefc4bd-d927-4cf9-94af-eab8b042b3ca-kube-api-access-rrqkz\") pod \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\" (UID: \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\") " Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.221562 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baefc4bd-d927-4cf9-94af-eab8b042b3ca-config\") pod \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\" (UID: \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\") " Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.221645 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/baefc4bd-d927-4cf9-94af-eab8b042b3ca-proxy-ca-bundles\") pod \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\" (UID: \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\") " Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.221726 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/baefc4bd-d927-4cf9-94af-eab8b042b3ca-client-ca\") pod \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\" (UID: \"baefc4bd-d927-4cf9-94af-eab8b042b3ca\") " Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.223127 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baefc4bd-d927-4cf9-94af-eab8b042b3ca-client-ca" (OuterVolumeSpecName: "client-ca") pod "baefc4bd-d927-4cf9-94af-eab8b042b3ca" (UID: "baefc4bd-d927-4cf9-94af-eab8b042b3ca"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.223539 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baefc4bd-d927-4cf9-94af-eab8b042b3ca-config" (OuterVolumeSpecName: "config") pod "baefc4bd-d927-4cf9-94af-eab8b042b3ca" (UID: "baefc4bd-d927-4cf9-94af-eab8b042b3ca"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.223604 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baefc4bd-d927-4cf9-94af-eab8b042b3ca-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "baefc4bd-d927-4cf9-94af-eab8b042b3ca" (UID: "baefc4bd-d927-4cf9-94af-eab8b042b3ca"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.229097 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baefc4bd-d927-4cf9-94af-eab8b042b3ca-kube-api-access-rrqkz" (OuterVolumeSpecName: "kube-api-access-rrqkz") pod "baefc4bd-d927-4cf9-94af-eab8b042b3ca" (UID: "baefc4bd-d927-4cf9-94af-eab8b042b3ca"). InnerVolumeSpecName "kube-api-access-rrqkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.229628 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baefc4bd-d927-4cf9-94af-eab8b042b3ca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "baefc4bd-d927-4cf9-94af-eab8b042b3ca" (UID: "baefc4bd-d927-4cf9-94af-eab8b042b3ca"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.323550 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-serving-cert\") pod \"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb\" (UID: \"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb\") " Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.324024 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-client-ca\") pod \"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb\" (UID: \"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb\") " Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.324117 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-config\") pod \"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb\" (UID: \"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb\") " Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.324235 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs6x9\" (UniqueName: \"kubernetes.io/projected/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-kube-api-access-bs6x9\") pod \"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb\" (UID: \"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb\") " Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.324635 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrqkz\" (UniqueName: \"kubernetes.io/projected/baefc4bd-d927-4cf9-94af-eab8b042b3ca-kube-api-access-rrqkz\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.324663 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baefc4bd-d927-4cf9-94af-eab8b042b3ca-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.324683 4667 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/baefc4bd-d927-4cf9-94af-eab8b042b3ca-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.324697 4667 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/baefc4bd-d927-4cf9-94af-eab8b042b3ca-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.324710 4667 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baefc4bd-d927-4cf9-94af-eab8b042b3ca-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.325709 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-config" (OuterVolumeSpecName: "config") pod "74c3828b-92ba-4a4a-bfeb-d5d02facdbdb" (UID: "74c3828b-92ba-4a4a-bfeb-d5d02facdbdb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.325705 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-client-ca" (OuterVolumeSpecName: "client-ca") pod "74c3828b-92ba-4a4a-bfeb-d5d02facdbdb" (UID: "74c3828b-92ba-4a4a-bfeb-d5d02facdbdb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.335564 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-kube-api-access-bs6x9" (OuterVolumeSpecName: "kube-api-access-bs6x9") pod "74c3828b-92ba-4a4a-bfeb-d5d02facdbdb" (UID: "74c3828b-92ba-4a4a-bfeb-d5d02facdbdb"). InnerVolumeSpecName "kube-api-access-bs6x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.335570 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "74c3828b-92ba-4a4a-bfeb-d5d02facdbdb" (UID: "74c3828b-92ba-4a4a-bfeb-d5d02facdbdb"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.427019 4667 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.427071 4667 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.427082 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.427094 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs6x9\" (UniqueName: \"kubernetes.io/projected/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb-kube-api-access-bs6x9\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.522266 4667 generic.go:334] "Generic (PLEG): container finished" podID="74c3828b-92ba-4a4a-bfeb-d5d02facdbdb" containerID="4936970391d4c9923966a78873ffb18137cf2d816fb34c1589649063d522e4e5" exitCode=0 Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.522350 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" event={"ID":"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb","Type":"ContainerDied","Data":"4936970391d4c9923966a78873ffb18137cf2d816fb34c1589649063d522e4e5"} Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.522388 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" event={"ID":"74c3828b-92ba-4a4a-bfeb-d5d02facdbdb","Type":"ContainerDied","Data":"53a2efa49784ac503e56b29ef85a4caa7a4ec842dba8f8accc2c3ddfa4ea08c9"} Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.522407 4667 scope.go:117] "RemoveContainer" containerID="4936970391d4c9923966a78873ffb18137cf2d816fb34c1589649063d522e4e5" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.522595 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.528683 4667 generic.go:334] "Generic (PLEG): container finished" podID="baefc4bd-d927-4cf9-94af-eab8b042b3ca" containerID="e64ef9d54d89ea09eae175e003404c5cdde2641ca34119c86752b41b57857afb" exitCode=0 Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.528771 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.528793 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" event={"ID":"baefc4bd-d927-4cf9-94af-eab8b042b3ca","Type":"ContainerDied","Data":"e64ef9d54d89ea09eae175e003404c5cdde2641ca34119c86752b41b57857afb"} Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.529497 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gf8vs" event={"ID":"baefc4bd-d927-4cf9-94af-eab8b042b3ca","Type":"ContainerDied","Data":"615432ec16782ca51a3e99ce4cc1389e6f5d09823ff24cee0fb38b69703b830e"} Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.560303 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gf8vs"] Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.566483 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gf8vs"] Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.567344 4667 scope.go:117] "RemoveContainer" containerID="4936970391d4c9923966a78873ffb18137cf2d816fb34c1589649063d522e4e5" Jan 31 03:53:57 crc kubenswrapper[4667]: E0131 03:53:57.567864 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4936970391d4c9923966a78873ffb18137cf2d816fb34c1589649063d522e4e5\": container with ID starting with 4936970391d4c9923966a78873ffb18137cf2d816fb34c1589649063d522e4e5 not found: ID does not exist" containerID="4936970391d4c9923966a78873ffb18137cf2d816fb34c1589649063d522e4e5" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.568040 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4936970391d4c9923966a78873ffb18137cf2d816fb34c1589649063d522e4e5"} err="failed to get container status \"4936970391d4c9923966a78873ffb18137cf2d816fb34c1589649063d522e4e5\": rpc error: code = NotFound desc = could not find container \"4936970391d4c9923966a78873ffb18137cf2d816fb34c1589649063d522e4e5\": container with ID starting with 4936970391d4c9923966a78873ffb18137cf2d816fb34c1589649063d522e4e5 not found: ID does not exist" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.568642 4667 scope.go:117] "RemoveContainer" containerID="e64ef9d54d89ea09eae175e003404c5cdde2641ca34119c86752b41b57857afb" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.579976 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2"] Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.584616 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5nl2"] Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.598101 4667 scope.go:117] "RemoveContainer" containerID="e64ef9d54d89ea09eae175e003404c5cdde2641ca34119c86752b41b57857afb" Jan 31 03:53:57 crc kubenswrapper[4667]: E0131 03:53:57.599490 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e64ef9d54d89ea09eae175e003404c5cdde2641ca34119c86752b41b57857afb\": container with ID starting with e64ef9d54d89ea09eae175e003404c5cdde2641ca34119c86752b41b57857afb not found: ID does not exist" 
containerID="e64ef9d54d89ea09eae175e003404c5cdde2641ca34119c86752b41b57857afb" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.599573 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e64ef9d54d89ea09eae175e003404c5cdde2641ca34119c86752b41b57857afb"} err="failed to get container status \"e64ef9d54d89ea09eae175e003404c5cdde2641ca34119c86752b41b57857afb\": rpc error: code = NotFound desc = could not find container \"e64ef9d54d89ea09eae175e003404c5cdde2641ca34119c86752b41b57857afb\": container with ID starting with e64ef9d54d89ea09eae175e003404c5cdde2641ca34119c86752b41b57857afb not found: ID does not exist" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.714168 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6db9f6c479-98bfr"] Jan 31 03:53:57 crc kubenswrapper[4667]: E0131 03:53:57.714758 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baefc4bd-d927-4cf9-94af-eab8b042b3ca" containerName="controller-manager" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.714777 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="baefc4bd-d927-4cf9-94af-eab8b042b3ca" containerName="controller-manager" Jan 31 03:53:57 crc kubenswrapper[4667]: E0131 03:53:57.714800 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.714808 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 03:53:57 crc kubenswrapper[4667]: E0131 03:53:57.714825 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c3828b-92ba-4a4a-bfeb-d5d02facdbdb" containerName="route-controller-manager" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.714833 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c3828b-92ba-4a4a-bfeb-d5d02facdbdb" containerName="route-controller-manager" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.715023 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.715042 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="74c3828b-92ba-4a4a-bfeb-d5d02facdbdb" containerName="route-controller-manager" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.715062 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="baefc4bd-d927-4cf9-94af-eab8b042b3ca" containerName="controller-manager" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.715701 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6db9f6c479-98bfr" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.718377 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.720313 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.720530 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.720925 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.721106 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.725744 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.737013 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6db9f6c479-98bfr"] Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.738368 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2"] Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.739090 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.743650 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.743882 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.744057 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.744239 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.744444 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.745032 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.748203 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.760949 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2"] Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.815558 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6db9f6c479-98bfr"] Jan 31 03:53:57 
crc kubenswrapper[4667]: E0131 03:53:57.815968 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-tt8vv proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6db9f6c479-98bfr" podUID="e264ffb8-e418-4ea6-bfe8-e20342eba977" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.833172 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75792672-3f54-4148-8b9c-689819ffe60a-serving-cert\") pod \"route-controller-manager-76d96ddfdf-n7rt2\" (UID: \"75792672-3f54-4148-8b9c-689819ffe60a\") " pod="openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.833259 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75792672-3f54-4148-8b9c-689819ffe60a-config\") pod \"route-controller-manager-76d96ddfdf-n7rt2\" (UID: \"75792672-3f54-4148-8b9c-689819ffe60a\") " pod="openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.833354 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e264ffb8-e418-4ea6-bfe8-e20342eba977-client-ca\") pod \"controller-manager-6db9f6c479-98bfr\" (UID: \"e264ffb8-e418-4ea6-bfe8-e20342eba977\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-98bfr" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.833389 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e264ffb8-e418-4ea6-bfe8-e20342eba977-serving-cert\") pod \"controller-manager-6db9f6c479-98bfr\" (UID: \"e264ffb8-e418-4ea6-bfe8-e20342eba977\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-98bfr" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.833409 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75792672-3f54-4148-8b9c-689819ffe60a-client-ca\") pod \"route-controller-manager-76d96ddfdf-n7rt2\" (UID: \"75792672-3f54-4148-8b9c-689819ffe60a\") " pod="openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.833434 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e264ffb8-e418-4ea6-bfe8-e20342eba977-config\") pod \"controller-manager-6db9f6c479-98bfr\" (UID: \"e264ffb8-e418-4ea6-bfe8-e20342eba977\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-98bfr" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.833474 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vr7l\" (UniqueName: \"kubernetes.io/projected/75792672-3f54-4148-8b9c-689819ffe60a-kube-api-access-9vr7l\") pod \"route-controller-manager-76d96ddfdf-n7rt2\" (UID: \"75792672-3f54-4148-8b9c-689819ffe60a\") " pod="openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 
03:53:57.833632 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt8vv\" (UniqueName: \"kubernetes.io/projected/e264ffb8-e418-4ea6-bfe8-e20342eba977-kube-api-access-tt8vv\") pod \"controller-manager-6db9f6c479-98bfr\" (UID: \"e264ffb8-e418-4ea6-bfe8-e20342eba977\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-98bfr" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.833774 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e264ffb8-e418-4ea6-bfe8-e20342eba977-proxy-ca-bundles\") pod \"controller-manager-6db9f6c479-98bfr\" (UID: \"e264ffb8-e418-4ea6-bfe8-e20342eba977\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-98bfr" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.934684 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt8vv\" (UniqueName: \"kubernetes.io/projected/e264ffb8-e418-4ea6-bfe8-e20342eba977-kube-api-access-tt8vv\") pod \"controller-manager-6db9f6c479-98bfr\" (UID: \"e264ffb8-e418-4ea6-bfe8-e20342eba977\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-98bfr" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.934763 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e264ffb8-e418-4ea6-bfe8-e20342eba977-proxy-ca-bundles\") pod \"controller-manager-6db9f6c479-98bfr\" (UID: \"e264ffb8-e418-4ea6-bfe8-e20342eba977\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-98bfr" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.934825 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75792672-3f54-4148-8b9c-689819ffe60a-serving-cert\") pod \"route-controller-manager-76d96ddfdf-n7rt2\" (UID: \"75792672-3f54-4148-8b9c-689819ffe60a\") " pod="openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.934878 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75792672-3f54-4148-8b9c-689819ffe60a-config\") pod \"route-controller-manager-76d96ddfdf-n7rt2\" (UID: \"75792672-3f54-4148-8b9c-689819ffe60a\") " pod="openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.934914 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e264ffb8-e418-4ea6-bfe8-e20342eba977-client-ca\") pod \"controller-manager-6db9f6c479-98bfr\" (UID: \"e264ffb8-e418-4ea6-bfe8-e20342eba977\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-98bfr" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.934941 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e264ffb8-e418-4ea6-bfe8-e20342eba977-serving-cert\") pod \"controller-manager-6db9f6c479-98bfr\" (UID: \"e264ffb8-e418-4ea6-bfe8-e20342eba977\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-98bfr" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.934963 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/75792672-3f54-4148-8b9c-689819ffe60a-client-ca\") pod \"route-controller-manager-76d96ddfdf-n7rt2\" (UID: \"75792672-3f54-4148-8b9c-689819ffe60a\") " pod="openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.934985 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e264ffb8-e418-4ea6-bfe8-e20342eba977-config\") pod \"controller-manager-6db9f6c479-98bfr\" (UID: \"e264ffb8-e418-4ea6-bfe8-e20342eba977\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-98bfr" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.935016 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vr7l\" (UniqueName: \"kubernetes.io/projected/75792672-3f54-4148-8b9c-689819ffe60a-kube-api-access-9vr7l\") pod \"route-controller-manager-76d96ddfdf-n7rt2\" (UID: \"75792672-3f54-4148-8b9c-689819ffe60a\") " pod="openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.936363 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75792672-3f54-4148-8b9c-689819ffe60a-client-ca\") pod \"route-controller-manager-76d96ddfdf-n7rt2\" (UID: \"75792672-3f54-4148-8b9c-689819ffe60a\") " pod="openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.936427 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e264ffb8-e418-4ea6-bfe8-e20342eba977-client-ca\") pod \"controller-manager-6db9f6c479-98bfr\" (UID: \"e264ffb8-e418-4ea6-bfe8-e20342eba977\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-98bfr" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.936650 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75792672-3f54-4148-8b9c-689819ffe60a-config\") pod \"route-controller-manager-76d96ddfdf-n7rt2\" (UID: \"75792672-3f54-4148-8b9c-689819ffe60a\") " pod="openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.937046 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e264ffb8-e418-4ea6-bfe8-e20342eba977-config\") pod \"controller-manager-6db9f6c479-98bfr\" (UID: \"e264ffb8-e418-4ea6-bfe8-e20342eba977\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-98bfr" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.937636 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e264ffb8-e418-4ea6-bfe8-e20342eba977-proxy-ca-bundles\") pod \"controller-manager-6db9f6c479-98bfr\" (UID: \"e264ffb8-e418-4ea6-bfe8-e20342eba977\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-98bfr" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.940295 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75792672-3f54-4148-8b9c-689819ffe60a-serving-cert\") pod \"route-controller-manager-76d96ddfdf-n7rt2\" (UID: \"75792672-3f54-4148-8b9c-689819ffe60a\") " 
pod="openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.940294 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e264ffb8-e418-4ea6-bfe8-e20342eba977-serving-cert\") pod \"controller-manager-6db9f6c479-98bfr\" (UID: \"e264ffb8-e418-4ea6-bfe8-e20342eba977\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-98bfr" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.957103 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt8vv\" (UniqueName: \"kubernetes.io/projected/e264ffb8-e418-4ea6-bfe8-e20342eba977-kube-api-access-tt8vv\") pod \"controller-manager-6db9f6c479-98bfr\" (UID: \"e264ffb8-e418-4ea6-bfe8-e20342eba977\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-98bfr" Jan 31 03:53:57 crc kubenswrapper[4667]: I0131 03:53:57.964691 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vr7l\" (UniqueName: \"kubernetes.io/projected/75792672-3f54-4148-8b9c-689819ffe60a-kube-api-access-9vr7l\") pod \"route-controller-manager-76d96ddfdf-n7rt2\" (UID: \"75792672-3f54-4148-8b9c-689819ffe60a\") " pod="openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2" Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.066059 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2" Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.329632 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2"] Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.538719 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2" event={"ID":"75792672-3f54-4148-8b9c-689819ffe60a","Type":"ContainerStarted","Data":"af0c0157a4956dd2e4e77d505e73db9b5558ce2ce21a98a14f2e4167ad0f3d4a"} Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.538766 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6db9f6c479-98bfr" Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.539351 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2" event={"ID":"75792672-3f54-4148-8b9c-689819ffe60a","Type":"ContainerStarted","Data":"36d46d469efb668dc0c34fbcf5346d547b10c80517f8ea1266324fa665087a51"} Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.541496 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2" Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.548410 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6db9f6c479-98bfr" Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.566549 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2" podStartSLOduration=1.566530319 podStartE2EDuration="1.566530319s" podCreationTimestamp="2026-01-31 03:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:53:58.560936369 +0000 UTC m=+362.077271678" watchObservedRunningTime="2026-01-31 03:53:58.566530319 +0000 UTC m=+362.082865618" Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.645324 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e264ffb8-e418-4ea6-bfe8-e20342eba977-proxy-ca-bundles\") pod \"e264ffb8-e418-4ea6-bfe8-e20342eba977\" (UID: \"e264ffb8-e418-4ea6-bfe8-e20342eba977\") " Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.645411 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e264ffb8-e418-4ea6-bfe8-e20342eba977-serving-cert\") pod \"e264ffb8-e418-4ea6-bfe8-e20342eba977\" (UID: \"e264ffb8-e418-4ea6-bfe8-e20342eba977\") " Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.645447 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e264ffb8-e418-4ea6-bfe8-e20342eba977-client-ca\") pod \"e264ffb8-e418-4ea6-bfe8-e20342eba977\" (UID: \"e264ffb8-e418-4ea6-bfe8-e20342eba977\") " Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.645470 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e264ffb8-e418-4ea6-bfe8-e20342eba977-config\") pod \"e264ffb8-e418-4ea6-bfe8-e20342eba977\" (UID: \"e264ffb8-e418-4ea6-bfe8-e20342eba977\") " Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.645527 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt8vv\" (UniqueName: \"kubernetes.io/projected/e264ffb8-e418-4ea6-bfe8-e20342eba977-kube-api-access-tt8vv\") pod \"e264ffb8-e418-4ea6-bfe8-e20342eba977\" (UID: \"e264ffb8-e418-4ea6-bfe8-e20342eba977\") " Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.645784 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e264ffb8-e418-4ea6-bfe8-e20342eba977-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e264ffb8-e418-4ea6-bfe8-e20342eba977" (UID: "e264ffb8-e418-4ea6-bfe8-e20342eba977"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.646455 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e264ffb8-e418-4ea6-bfe8-e20342eba977-client-ca" (OuterVolumeSpecName: "client-ca") pod "e264ffb8-e418-4ea6-bfe8-e20342eba977" (UID: "e264ffb8-e418-4ea6-bfe8-e20342eba977"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.646962 4667 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e264ffb8-e418-4ea6-bfe8-e20342eba977-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.647069 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e264ffb8-e418-4ea6-bfe8-e20342eba977-config" (OuterVolumeSpecName: "config") pod "e264ffb8-e418-4ea6-bfe8-e20342eba977" (UID: "e264ffb8-e418-4ea6-bfe8-e20342eba977"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.650853 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e264ffb8-e418-4ea6-bfe8-e20342eba977-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e264ffb8-e418-4ea6-bfe8-e20342eba977" (UID: "e264ffb8-e418-4ea6-bfe8-e20342eba977"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.651174 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e264ffb8-e418-4ea6-bfe8-e20342eba977-kube-api-access-tt8vv" (OuterVolumeSpecName: "kube-api-access-tt8vv") pod "e264ffb8-e418-4ea6-bfe8-e20342eba977" (UID: "e264ffb8-e418-4ea6-bfe8-e20342eba977"). InnerVolumeSpecName "kube-api-access-tt8vv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.748037 4667 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e264ffb8-e418-4ea6-bfe8-e20342eba977-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.748071 4667 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e264ffb8-e418-4ea6-bfe8-e20342eba977-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.748080 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e264ffb8-e418-4ea6-bfe8-e20342eba977-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.748090 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt8vv\" (UniqueName: \"kubernetes.io/projected/e264ffb8-e418-4ea6-bfe8-e20342eba977-kube-api-access-tt8vv\") on node \"crc\" DevicePath \"\"" Jan 31 03:53:58 crc kubenswrapper[4667]: I0131 03:53:58.983636 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76d96ddfdf-n7rt2" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.288633 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74c3828b-92ba-4a4a-bfeb-d5d02facdbdb" path="/var/lib/kubelet/pods/74c3828b-92ba-4a4a-bfeb-d5d02facdbdb/volumes" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.289149 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baefc4bd-d927-4cf9-94af-eab8b042b3ca" path="/var/lib/kubelet/pods/baefc4bd-d927-4cf9-94af-eab8b042b3ca/volumes" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.545028 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6db9f6c479-98bfr" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.603959 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-dbf56b754-w588l"] Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.605402 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.613788 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.613863 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.614138 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6db9f6c479-98bfr"] Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.615484 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.615581 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.617928 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.618135 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.630582 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6db9f6c479-98bfr"] Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.633832 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.635871 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dbf56b754-w588l"] Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.722726 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/680ac2ec-d39f-4078-ae5e-beb315521768-config\") pod \"controller-manager-dbf56b754-w588l\" (UID: \"680ac2ec-d39f-4078-ae5e-beb315521768\") " pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.722854 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/680ac2ec-d39f-4078-ae5e-beb315521768-proxy-ca-bundles\") pod \"controller-manager-dbf56b754-w588l\" (UID: \"680ac2ec-d39f-4078-ae5e-beb315521768\") " pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.722906 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/680ac2ec-d39f-4078-ae5e-beb315521768-serving-cert\") pod \"controller-manager-dbf56b754-w588l\" (UID: 
\"680ac2ec-d39f-4078-ae5e-beb315521768\") " pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.722996 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/680ac2ec-d39f-4078-ae5e-beb315521768-client-ca\") pod \"controller-manager-dbf56b754-w588l\" (UID: \"680ac2ec-d39f-4078-ae5e-beb315521768\") " pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.723026 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqpqn\" (UniqueName: \"kubernetes.io/projected/680ac2ec-d39f-4078-ae5e-beb315521768-kube-api-access-mqpqn\") pod \"controller-manager-dbf56b754-w588l\" (UID: \"680ac2ec-d39f-4078-ae5e-beb315521768\") " pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.824241 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/680ac2ec-d39f-4078-ae5e-beb315521768-serving-cert\") pod \"controller-manager-dbf56b754-w588l\" (UID: \"680ac2ec-d39f-4078-ae5e-beb315521768\") " pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.824342 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/680ac2ec-d39f-4078-ae5e-beb315521768-client-ca\") pod \"controller-manager-dbf56b754-w588l\" (UID: \"680ac2ec-d39f-4078-ae5e-beb315521768\") " pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.824374 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqpqn\" (UniqueName: \"kubernetes.io/projected/680ac2ec-d39f-4078-ae5e-beb315521768-kube-api-access-mqpqn\") pod \"controller-manager-dbf56b754-w588l\" (UID: \"680ac2ec-d39f-4078-ae5e-beb315521768\") " pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.824430 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/680ac2ec-d39f-4078-ae5e-beb315521768-config\") pod \"controller-manager-dbf56b754-w588l\" (UID: \"680ac2ec-d39f-4078-ae5e-beb315521768\") " pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.824477 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/680ac2ec-d39f-4078-ae5e-beb315521768-proxy-ca-bundles\") pod \"controller-manager-dbf56b754-w588l\" (UID: \"680ac2ec-d39f-4078-ae5e-beb315521768\") " pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.826268 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/680ac2ec-d39f-4078-ae5e-beb315521768-proxy-ca-bundles\") pod \"controller-manager-dbf56b754-w588l\" (UID: \"680ac2ec-d39f-4078-ae5e-beb315521768\") " pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 
03:53:59.827472 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/680ac2ec-d39f-4078-ae5e-beb315521768-client-ca\") pod \"controller-manager-dbf56b754-w588l\" (UID: \"680ac2ec-d39f-4078-ae5e-beb315521768\") " pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.828719 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/680ac2ec-d39f-4078-ae5e-beb315521768-config\") pod \"controller-manager-dbf56b754-w588l\" (UID: \"680ac2ec-d39f-4078-ae5e-beb315521768\") " pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.835834 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/680ac2ec-d39f-4078-ae5e-beb315521768-serving-cert\") pod \"controller-manager-dbf56b754-w588l\" (UID: \"680ac2ec-d39f-4078-ae5e-beb315521768\") " pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.846093 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqpqn\" (UniqueName: \"kubernetes.io/projected/680ac2ec-d39f-4078-ae5e-beb315521768-kube-api-access-mqpqn\") pod \"controller-manager-dbf56b754-w588l\" (UID: \"680ac2ec-d39f-4078-ae5e-beb315521768\") " pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:53:59 crc kubenswrapper[4667]: I0131 03:53:59.937051 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:54:00 crc kubenswrapper[4667]: I0131 03:54:00.436254 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dbf56b754-w588l"] Jan 31 03:54:00 crc kubenswrapper[4667]: I0131 03:54:00.550931 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" event={"ID":"680ac2ec-d39f-4078-ae5e-beb315521768","Type":"ContainerStarted","Data":"fd3f757847454b75f52531954fabccb11f59fa4345e3c00414839213ff5878e7"} Jan 31 03:54:01 crc kubenswrapper[4667]: I0131 03:54:01.289574 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e264ffb8-e418-4ea6-bfe8-e20342eba977" path="/var/lib/kubelet/pods/e264ffb8-e418-4ea6-bfe8-e20342eba977/volumes" Jan 31 03:54:01 crc kubenswrapper[4667]: I0131 03:54:01.558052 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" event={"ID":"680ac2ec-d39f-4078-ae5e-beb315521768","Type":"ContainerStarted","Data":"6a755c82af29d6acdea81a645b7e1cb3fb4d5a99f44ad0e5f11e8704df52bfea"} Jan 31 03:54:01 crc kubenswrapper[4667]: I0131 03:54:01.559175 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:54:01 crc kubenswrapper[4667]: I0131 03:54:01.565415 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:54:01 crc kubenswrapper[4667]: I0131 03:54:01.591436 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" 
podStartSLOduration=4.591411387 podStartE2EDuration="4.591411387s" podCreationTimestamp="2026-01-31 03:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:54:01.586552047 +0000 UTC m=+365.102887346" watchObservedRunningTime="2026-01-31 03:54:01.591411387 +0000 UTC m=+365.107746686" Jan 31 03:54:15 crc kubenswrapper[4667]: I0131 03:54:15.704164 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 03:54:15 crc kubenswrapper[4667]: I0131 03:54:15.705679 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 03:54:16 crc kubenswrapper[4667]: I0131 03:54:16.744038 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dbf56b754-w588l"] Jan 31 03:54:16 crc kubenswrapper[4667]: I0131 03:54:16.744628 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" podUID="680ac2ec-d39f-4078-ae5e-beb315521768" containerName="controller-manager" containerID="cri-o://6a755c82af29d6acdea81a645b7e1cb3fb4d5a99f44ad0e5f11e8704df52bfea" gracePeriod=30 Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.327132 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.409796 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/680ac2ec-d39f-4078-ae5e-beb315521768-client-ca\") pod \"680ac2ec-d39f-4078-ae5e-beb315521768\" (UID: \"680ac2ec-d39f-4078-ae5e-beb315521768\") " Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.409962 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/680ac2ec-d39f-4078-ae5e-beb315521768-serving-cert\") pod \"680ac2ec-d39f-4078-ae5e-beb315521768\" (UID: \"680ac2ec-d39f-4078-ae5e-beb315521768\") " Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.410031 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqpqn\" (UniqueName: \"kubernetes.io/projected/680ac2ec-d39f-4078-ae5e-beb315521768-kube-api-access-mqpqn\") pod \"680ac2ec-d39f-4078-ae5e-beb315521768\" (UID: \"680ac2ec-d39f-4078-ae5e-beb315521768\") " Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.410073 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/680ac2ec-d39f-4078-ae5e-beb315521768-proxy-ca-bundles\") pod \"680ac2ec-d39f-4078-ae5e-beb315521768\" (UID: \"680ac2ec-d39f-4078-ae5e-beb315521768\") " Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.410126 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/680ac2ec-d39f-4078-ae5e-beb315521768-config\") pod \"680ac2ec-d39f-4078-ae5e-beb315521768\" (UID: \"680ac2ec-d39f-4078-ae5e-beb315521768\") " Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.411688 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/680ac2ec-d39f-4078-ae5e-beb315521768-config" (OuterVolumeSpecName: "config") pod "680ac2ec-d39f-4078-ae5e-beb315521768" (UID: "680ac2ec-d39f-4078-ae5e-beb315521768"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.412907 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/680ac2ec-d39f-4078-ae5e-beb315521768-client-ca" (OuterVolumeSpecName: "client-ca") pod "680ac2ec-d39f-4078-ae5e-beb315521768" (UID: "680ac2ec-d39f-4078-ae5e-beb315521768"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.413276 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/680ac2ec-d39f-4078-ae5e-beb315521768-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "680ac2ec-d39f-4078-ae5e-beb315521768" (UID: "680ac2ec-d39f-4078-ae5e-beb315521768"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.425277 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/680ac2ec-d39f-4078-ae5e-beb315521768-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "680ac2ec-d39f-4078-ae5e-beb315521768" (UID: "680ac2ec-d39f-4078-ae5e-beb315521768"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.433914 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/680ac2ec-d39f-4078-ae5e-beb315521768-kube-api-access-mqpqn" (OuterVolumeSpecName: "kube-api-access-mqpqn") pod "680ac2ec-d39f-4078-ae5e-beb315521768" (UID: "680ac2ec-d39f-4078-ae5e-beb315521768"). InnerVolumeSpecName "kube-api-access-mqpqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.511487 4667 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/680ac2ec-d39f-4078-ae5e-beb315521768-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.511518 4667 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/680ac2ec-d39f-4078-ae5e-beb315521768-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.511529 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqpqn\" (UniqueName: \"kubernetes.io/projected/680ac2ec-d39f-4078-ae5e-beb315521768-kube-api-access-mqpqn\") on node \"crc\" DevicePath \"\"" Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.511539 4667 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/680ac2ec-d39f-4078-ae5e-beb315521768-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.511549 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/680ac2ec-d39f-4078-ae5e-beb315521768-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.703179 4667 generic.go:334] "Generic (PLEG): container finished" podID="680ac2ec-d39f-4078-ae5e-beb315521768" containerID="6a755c82af29d6acdea81a645b7e1cb3fb4d5a99f44ad0e5f11e8704df52bfea" exitCode=0 Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.703227 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" event={"ID":"680ac2ec-d39f-4078-ae5e-beb315521768","Type":"ContainerDied","Data":"6a755c82af29d6acdea81a645b7e1cb3fb4d5a99f44ad0e5f11e8704df52bfea"} Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.703255 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" event={"ID":"680ac2ec-d39f-4078-ae5e-beb315521768","Type":"ContainerDied","Data":"fd3f757847454b75f52531954fabccb11f59fa4345e3c00414839213ff5878e7"} Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.703273 4667 scope.go:117] "RemoveContainer" containerID="6a755c82af29d6acdea81a645b7e1cb3fb4d5a99f44ad0e5f11e8704df52bfea" Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.703291 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dbf56b754-w588l" Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.724873 4667 scope.go:117] "RemoveContainer" containerID="6a755c82af29d6acdea81a645b7e1cb3fb4d5a99f44ad0e5f11e8704df52bfea" Jan 31 03:54:17 crc kubenswrapper[4667]: E0131 03:54:17.725488 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a755c82af29d6acdea81a645b7e1cb3fb4d5a99f44ad0e5f11e8704df52bfea\": container with ID starting with 6a755c82af29d6acdea81a645b7e1cb3fb4d5a99f44ad0e5f11e8704df52bfea not found: ID does not exist" containerID="6a755c82af29d6acdea81a645b7e1cb3fb4d5a99f44ad0e5f11e8704df52bfea" Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.725535 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a755c82af29d6acdea81a645b7e1cb3fb4d5a99f44ad0e5f11e8704df52bfea"} err="failed to get container status \"6a755c82af29d6acdea81a645b7e1cb3fb4d5a99f44ad0e5f11e8704df52bfea\": rpc error: code = NotFound desc = could not find container \"6a755c82af29d6acdea81a645b7e1cb3fb4d5a99f44ad0e5f11e8704df52bfea\": container with ID starting with 6a755c82af29d6acdea81a645b7e1cb3fb4d5a99f44ad0e5f11e8704df52bfea not found: ID does not exist" Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.746950 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dbf56b754-w588l"] Jan 31 03:54:17 crc kubenswrapper[4667]: I0131 03:54:17.750694 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-dbf56b754-w588l"] Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.299122 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6db9f6c479-wzkdf"] Jan 31 03:54:18 crc kubenswrapper[4667]: E0131 03:54:18.299346 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="680ac2ec-d39f-4078-ae5e-beb315521768" containerName="controller-manager" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.299359 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="680ac2ec-d39f-4078-ae5e-beb315521768" containerName="controller-manager" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.299442 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="680ac2ec-d39f-4078-ae5e-beb315521768" containerName="controller-manager" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.299795 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.301956 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.302004 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.302136 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.302286 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.304454 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.305270 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.312818 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6db9f6c479-wzkdf"] Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.317312 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.422569 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8379eb22-8ca8-45de-9935-70eba4e93af8-serving-cert\") pod \"controller-manager-6db9f6c479-wzkdf\" (UID: \"8379eb22-8ca8-45de-9935-70eba4e93af8\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.422621 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smfmz\" (UniqueName: \"kubernetes.io/projected/8379eb22-8ca8-45de-9935-70eba4e93af8-kube-api-access-smfmz\") pod \"controller-manager-6db9f6c479-wzkdf\" (UID: \"8379eb22-8ca8-45de-9935-70eba4e93af8\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.422651 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8379eb22-8ca8-45de-9935-70eba4e93af8-proxy-ca-bundles\") pod \"controller-manager-6db9f6c479-wzkdf\" (UID: \"8379eb22-8ca8-45de-9935-70eba4e93af8\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.422713 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8379eb22-8ca8-45de-9935-70eba4e93af8-config\") pod \"controller-manager-6db9f6c479-wzkdf\" (UID: \"8379eb22-8ca8-45de-9935-70eba4e93af8\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.422964 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/8379eb22-8ca8-45de-9935-70eba4e93af8-client-ca\") pod \"controller-manager-6db9f6c479-wzkdf\" (UID: \"8379eb22-8ca8-45de-9935-70eba4e93af8\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.523744 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8379eb22-8ca8-45de-9935-70eba4e93af8-config\") pod \"controller-manager-6db9f6c479-wzkdf\" (UID: \"8379eb22-8ca8-45de-9935-70eba4e93af8\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.523821 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8379eb22-8ca8-45de-9935-70eba4e93af8-client-ca\") pod \"controller-manager-6db9f6c479-wzkdf\" (UID: \"8379eb22-8ca8-45de-9935-70eba4e93af8\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.523865 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8379eb22-8ca8-45de-9935-70eba4e93af8-serving-cert\") pod \"controller-manager-6db9f6c479-wzkdf\" (UID: \"8379eb22-8ca8-45de-9935-70eba4e93af8\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.523887 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smfmz\" (UniqueName: \"kubernetes.io/projected/8379eb22-8ca8-45de-9935-70eba4e93af8-kube-api-access-smfmz\") pod \"controller-manager-6db9f6c479-wzkdf\" (UID: \"8379eb22-8ca8-45de-9935-70eba4e93af8\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.523906 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8379eb22-8ca8-45de-9935-70eba4e93af8-proxy-ca-bundles\") pod \"controller-manager-6db9f6c479-wzkdf\" (UID: \"8379eb22-8ca8-45de-9935-70eba4e93af8\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.524922 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8379eb22-8ca8-45de-9935-70eba4e93af8-client-ca\") pod \"controller-manager-6db9f6c479-wzkdf\" (UID: \"8379eb22-8ca8-45de-9935-70eba4e93af8\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.525254 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8379eb22-8ca8-45de-9935-70eba4e93af8-proxy-ca-bundles\") pod \"controller-manager-6db9f6c479-wzkdf\" (UID: \"8379eb22-8ca8-45de-9935-70eba4e93af8\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.526238 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8379eb22-8ca8-45de-9935-70eba4e93af8-config\") pod \"controller-manager-6db9f6c479-wzkdf\" (UID: \"8379eb22-8ca8-45de-9935-70eba4e93af8\") " 
pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.528663 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8379eb22-8ca8-45de-9935-70eba4e93af8-serving-cert\") pod \"controller-manager-6db9f6c479-wzkdf\" (UID: \"8379eb22-8ca8-45de-9935-70eba4e93af8\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.545197 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smfmz\" (UniqueName: \"kubernetes.io/projected/8379eb22-8ca8-45de-9935-70eba4e93af8-kube-api-access-smfmz\") pod \"controller-manager-6db9f6c479-wzkdf\" (UID: \"8379eb22-8ca8-45de-9935-70eba4e93af8\") " pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.633795 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" Jan 31 03:54:18 crc kubenswrapper[4667]: I0131 03:54:18.867967 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6db9f6c479-wzkdf"] Jan 31 03:54:18 crc kubenswrapper[4667]: W0131 03:54:18.885737 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8379eb22_8ca8_45de_9935_70eba4e93af8.slice/crio-52a66fd758b6a98a8e6d5ebc7507e9e2df29a414dc3be13426d6cb22ef10c7c1 WatchSource:0}: Error finding container 52a66fd758b6a98a8e6d5ebc7507e9e2df29a414dc3be13426d6cb22ef10c7c1: Status 404 returned error can't find the container with id 52a66fd758b6a98a8e6d5ebc7507e9e2df29a414dc3be13426d6cb22ef10c7c1 Jan 31 03:54:19 crc kubenswrapper[4667]: I0131 03:54:19.293254 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="680ac2ec-d39f-4078-ae5e-beb315521768" path="/var/lib/kubelet/pods/680ac2ec-d39f-4078-ae5e-beb315521768/volumes" Jan 31 03:54:19 crc kubenswrapper[4667]: I0131 03:54:19.740003 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" event={"ID":"8379eb22-8ca8-45de-9935-70eba4e93af8","Type":"ContainerStarted","Data":"e494acc3048bce1756be3d436f2bd2c5599203ccc3a46285db788aa683d1123d"} Jan 31 03:54:19 crc kubenswrapper[4667]: I0131 03:54:19.740049 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" event={"ID":"8379eb22-8ca8-45de-9935-70eba4e93af8","Type":"ContainerStarted","Data":"52a66fd758b6a98a8e6d5ebc7507e9e2df29a414dc3be13426d6cb22ef10c7c1"} Jan 31 03:54:19 crc kubenswrapper[4667]: I0131 03:54:19.740388 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" Jan 31 03:54:19 crc kubenswrapper[4667]: I0131 03:54:19.744931 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" Jan 31 03:54:19 crc kubenswrapper[4667]: I0131 03:54:19.760070 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6db9f6c479-wzkdf" podStartSLOduration=3.760046054 podStartE2EDuration="3.760046054s" podCreationTimestamp="2026-01-31 03:54:16 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:54:19.757346102 +0000 UTC m=+383.273681391" watchObservedRunningTime="2026-01-31 03:54:19.760046054 +0000 UTC m=+383.276381353" Jan 31 03:54:19 crc kubenswrapper[4667]: I0131 03:54:19.901976 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hhq8d"] Jan 31 03:54:19 crc kubenswrapper[4667]: I0131 03:54:19.902661 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:19 crc kubenswrapper[4667]: I0131 03:54:19.919890 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hhq8d"] Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.049165 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-trusted-ca\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.049234 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-registry-certificates\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.049279 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.049369 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-bound-sa-token\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.049515 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-registry-tls\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.049610 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.049642 4667 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.049721 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4pd8\" (UniqueName: \"kubernetes.io/projected/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-kube-api-access-p4pd8\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.073655 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.151235 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-trusted-ca\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.151287 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-registry-certificates\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.151312 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-bound-sa-token\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.151346 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-registry-tls\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.151372 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.151399 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.151426 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4pd8\" (UniqueName: \"kubernetes.io/projected/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-kube-api-access-p4pd8\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.152495 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.153184 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-trusted-ca\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.153412 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-registry-certificates\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.162438 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-registry-tls\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.163451 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.172266 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-bound-sa-token\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.177313 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4pd8\" (UniqueName: \"kubernetes.io/projected/c8cd6071-dafe-4be1-96dd-3c565ed4d67d-kube-api-access-p4pd8\") pod \"image-registry-66df7c8f76-hhq8d\" (UID: \"c8cd6071-dafe-4be1-96dd-3c565ed4d67d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc 
kubenswrapper[4667]: I0131 03:54:20.219517 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.422759 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hhq8d"] Jan 31 03:54:20 crc kubenswrapper[4667]: W0131 03:54:20.435521 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8cd6071_dafe_4be1_96dd_3c565ed4d67d.slice/crio-08c700d80777b6f2ccebb35da50c1c40bc5105744c9b091e25cf8fcb501e71b9 WatchSource:0}: Error finding container 08c700d80777b6f2ccebb35da50c1c40bc5105744c9b091e25cf8fcb501e71b9: Status 404 returned error can't find the container with id 08c700d80777b6f2ccebb35da50c1c40bc5105744c9b091e25cf8fcb501e71b9 Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.781706 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" event={"ID":"c8cd6071-dafe-4be1-96dd-3c565ed4d67d","Type":"ContainerStarted","Data":"986b67903c3f2c95283dfdb050cb9e260a805766dfaea5dbd88eafaf38df149f"} Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.781767 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" event={"ID":"c8cd6071-dafe-4be1-96dd-3c565ed4d67d","Type":"ContainerStarted","Data":"08c700d80777b6f2ccebb35da50c1c40bc5105744c9b091e25cf8fcb501e71b9"} Jan 31 03:54:20 crc kubenswrapper[4667]: I0131 03:54:20.810052 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" podStartSLOduration=1.8100321780000002 podStartE2EDuration="1.810032178s" podCreationTimestamp="2026-01-31 03:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:54:20.808277201 +0000 UTC m=+384.324612530" watchObservedRunningTime="2026-01-31 03:54:20.810032178 +0000 UTC m=+384.326367477" Jan 31 03:54:21 crc kubenswrapper[4667]: I0131 03:54:21.787487 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:32 crc kubenswrapper[4667]: I0131 03:54:32.742374 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lh6xn"] Jan 31 03:54:32 crc kubenswrapper[4667]: I0131 03:54:32.746458 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lh6xn" podUID="6fc82b44-ef8d-4f7c-a022-fcbed68b1fab" containerName="registry-server" containerID="cri-o://1f0936dcdce432c420bef9d12c954ae33873440cfa4dbcf167acc707c076a5c4" gracePeriod=30 Jan 31 03:54:32 crc kubenswrapper[4667]: I0131 03:54:32.761116 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4lz7l"] Jan 31 03:54:32 crc kubenswrapper[4667]: I0131 03:54:32.763904 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4lz7l" podUID="7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6" containerName="registry-server" containerID="cri-o://650ff2bbf8345fc13923d8ca8414b911d79ba6f83c47772614c73467b9b8e9f8" gracePeriod=30 Jan 31 03:54:32 crc kubenswrapper[4667]: I0131 03:54:32.786545 4667 kubelet.go:2437] "SyncLoop DELETE" 
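The SyncLoop (probe) lines track readiness flipping from status="" to status="ready" once a container first answers its probe, as the controller-manager pod did above and the image-registry pod is about to. A stripped-down model of that poll-until-healthy loop follows; kubelet's prober is far richer, and the URL, interval, and attempt count here are invented for the sketch.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitReady polls an HTTP endpoint until it returns 200, the same basic
// contract as a readiness probe: unready ("" in the log) until the first
// success, "ready" afterwards.
func waitReady(url string, interval time.Duration, attempts int) bool {
	for i := 0; i < attempts; i++ {
		if resp, err := http.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return true
			}
		}
		time.Sleep(interval)
	}
	return false
}

func main() {
	fmt.Println(waitReady("http://127.0.0.1:8080/healthz", time.Second, 3))
}
```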
source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7pbrg"] Jan 31 03:54:32 crc kubenswrapper[4667]: I0131 03:54:32.787618 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" podUID="d426c096-b6d9-4696-8066-2b9ec75356af" containerName="marketplace-operator" containerID="cri-o://19612eb55c241bb25ef7596fc023d970256a4902beb8863cd5ef2446af5663e4" gracePeriod=30 Jan 31 03:54:32 crc kubenswrapper[4667]: I0131 03:54:32.794039 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gzpml"] Jan 31 03:54:32 crc kubenswrapper[4667]: I0131 03:54:32.794411 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gzpml" podUID="6d010741-ba9c-43b5-9dc3-87cb17d353d2" containerName="registry-server" containerID="cri-o://38f9f897055c327a1fd1b71b991eeb503e77ff9d41adadf4ca7ed4d646a588fa" gracePeriod=30 Jan 31 03:54:32 crc kubenswrapper[4667]: I0131 03:54:32.806799 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fz7n8"] Jan 31 03:54:32 crc kubenswrapper[4667]: I0131 03:54:32.807348 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fz7n8" podUID="b6b3a151-c2e9-4461-92c3-b7752926f08c" containerName="registry-server" containerID="cri-o://54d0edca032e18cf999c736223dd9c70d30a429057f87ed564c7bf6678056ea6" gracePeriod=30 Jan 31 03:54:32 crc kubenswrapper[4667]: I0131 03:54:32.814753 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cq68x"] Jan 31 03:54:32 crc kubenswrapper[4667]: I0131 03:54:32.815567 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cq68x" Jan 31 03:54:32 crc kubenswrapper[4667]: I0131 03:54:32.836219 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cq68x"] Jan 31 03:54:32 crc kubenswrapper[4667]: I0131 03:54:32.984004 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfssn\" (UniqueName: \"kubernetes.io/projected/eca662bd-5da4-45dd-9d55-714a74234cec-kube-api-access-vfssn\") pod \"marketplace-operator-79b997595-cq68x\" (UID: \"eca662bd-5da4-45dd-9d55-714a74234cec\") " pod="openshift-marketplace/marketplace-operator-79b997595-cq68x" Jan 31 03:54:32 crc kubenswrapper[4667]: I0131 03:54:32.984104 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/eca662bd-5da4-45dd-9d55-714a74234cec-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cq68x\" (UID: \"eca662bd-5da4-45dd-9d55-714a74234cec\") " pod="openshift-marketplace/marketplace-operator-79b997595-cq68x" Jan 31 03:54:32 crc kubenswrapper[4667]: I0131 03:54:32.984152 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eca662bd-5da4-45dd-9d55-714a74234cec-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cq68x\" (UID: \"eca662bd-5da4-45dd-9d55-714a74234cec\") " pod="openshift-marketplace/marketplace-operator-79b997595-cq68x" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.086563 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfssn\" (UniqueName: \"kubernetes.io/projected/eca662bd-5da4-45dd-9d55-714a74234cec-kube-api-access-vfssn\") pod \"marketplace-operator-79b997595-cq68x\" (UID: \"eca662bd-5da4-45dd-9d55-714a74234cec\") " pod="openshift-marketplace/marketplace-operator-79b997595-cq68x" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.086692 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/eca662bd-5da4-45dd-9d55-714a74234cec-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cq68x\" (UID: \"eca662bd-5da4-45dd-9d55-714a74234cec\") " pod="openshift-marketplace/marketplace-operator-79b997595-cq68x" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.086717 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eca662bd-5da4-45dd-9d55-714a74234cec-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cq68x\" (UID: \"eca662bd-5da4-45dd-9d55-714a74234cec\") " pod="openshift-marketplace/marketplace-operator-79b997595-cq68x" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.088291 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eca662bd-5da4-45dd-9d55-714a74234cec-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cq68x\" (UID: \"eca662bd-5da4-45dd-9d55-714a74234cec\") " pod="openshift-marketplace/marketplace-operator-79b997595-cq68x" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.097257 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/eca662bd-5da4-45dd-9d55-714a74234cec-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cq68x\" (UID: \"eca662bd-5da4-45dd-9d55-714a74234cec\") " pod="openshift-marketplace/marketplace-operator-79b997595-cq68x" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.104572 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfssn\" (UniqueName: \"kubernetes.io/projected/eca662bd-5da4-45dd-9d55-714a74234cec-kube-api-access-vfssn\") pod \"marketplace-operator-79b997595-cq68x\" (UID: \"eca662bd-5da4-45dd-9d55-714a74234cec\") " pod="openshift-marketplace/marketplace-operator-79b997595-cq68x" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.165356 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cq68x" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.327057 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4lz7l" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.493901 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6-catalog-content\") pod \"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6\" (UID: \"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6\") " Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.494128 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcx9j\" (UniqueName: \"kubernetes.io/projected/7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6-kube-api-access-dcx9j\") pod \"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6\" (UID: \"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6\") " Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.494314 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6-utilities\") pod \"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6\" (UID: \"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6\") " Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.495597 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6-utilities" (OuterVolumeSpecName: "utilities") pod "7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6" (UID: "7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.501148 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6-kube-api-access-dcx9j" (OuterVolumeSpecName: "kube-api-access-dcx9j") pod "7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6" (UID: "7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6"). InnerVolumeSpecName "kube-api-access-dcx9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.566795 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6" (UID: "7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.595226 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.595288 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.595304 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcx9j\" (UniqueName: \"kubernetes.io/projected/7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6-kube-api-access-dcx9j\") on node \"crc\" DevicePath \"\"" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.641658 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fz7n8" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.644434 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gzpml" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.663200 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.677246 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lh6xn" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.800994 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fc82b44-ef8d-4f7c-a022-fcbed68b1fab-utilities\") pod \"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab\" (UID: \"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab\") " Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.801041 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b3a151-c2e9-4461-92c3-b7752926f08c-utilities\") pod \"b6b3a151-c2e9-4461-92c3-b7752926f08c\" (UID: \"b6b3a151-c2e9-4461-92c3-b7752926f08c\") " Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.801072 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmrmj\" (UniqueName: \"kubernetes.io/projected/b6b3a151-c2e9-4461-92c3-b7752926f08c-kube-api-access-jmrmj\") pod \"b6b3a151-c2e9-4461-92c3-b7752926f08c\" (UID: \"b6b3a151-c2e9-4461-92c3-b7752926f08c\") " Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.801112 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d426c096-b6d9-4696-8066-2b9ec75356af-marketplace-operator-metrics\") pod \"d426c096-b6d9-4696-8066-2b9ec75356af\" (UID: \"d426c096-b6d9-4696-8066-2b9ec75356af\") " Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.801142 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x69x\" (UniqueName: \"kubernetes.io/projected/6d010741-ba9c-43b5-9dc3-87cb17d353d2-kube-api-access-8x69x\") pod \"6d010741-ba9c-43b5-9dc3-87cb17d353d2\" (UID: \"6d010741-ba9c-43b5-9dc3-87cb17d353d2\") " Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.801182 4667 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzjr5\" (UniqueName: \"kubernetes.io/projected/6fc82b44-ef8d-4f7c-a022-fcbed68b1fab-kube-api-access-xzjr5\") pod \"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab\" (UID: \"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab\") " Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.801210 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fc82b44-ef8d-4f7c-a022-fcbed68b1fab-catalog-content\") pod \"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab\" (UID: \"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab\") " Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.801249 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d010741-ba9c-43b5-9dc3-87cb17d353d2-catalog-content\") pod \"6d010741-ba9c-43b5-9dc3-87cb17d353d2\" (UID: \"6d010741-ba9c-43b5-9dc3-87cb17d353d2\") " Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.801290 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-527dl\" (UniqueName: \"kubernetes.io/projected/d426c096-b6d9-4696-8066-2b9ec75356af-kube-api-access-527dl\") pod \"d426c096-b6d9-4696-8066-2b9ec75356af\" (UID: \"d426c096-b6d9-4696-8066-2b9ec75356af\") " Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.802027 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6b3a151-c2e9-4461-92c3-b7752926f08c-utilities" (OuterVolumeSpecName: "utilities") pod "b6b3a151-c2e9-4461-92c3-b7752926f08c" (UID: "b6b3a151-c2e9-4461-92c3-b7752926f08c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.802071 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fc82b44-ef8d-4f7c-a022-fcbed68b1fab-utilities" (OuterVolumeSpecName: "utilities") pod "6fc82b44-ef8d-4f7c-a022-fcbed68b1fab" (UID: "6fc82b44-ef8d-4f7c-a022-fcbed68b1fab"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.826122 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b3a151-c2e9-4461-92c3-b7752926f08c-catalog-content\") pod \"b6b3a151-c2e9-4461-92c3-b7752926f08c\" (UID: \"b6b3a151-c2e9-4461-92c3-b7752926f08c\") " Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.826294 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d426c096-b6d9-4696-8066-2b9ec75356af-marketplace-trusted-ca\") pod \"d426c096-b6d9-4696-8066-2b9ec75356af\" (UID: \"d426c096-b6d9-4696-8066-2b9ec75356af\") " Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.826340 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d010741-ba9c-43b5-9dc3-87cb17d353d2-utilities\") pod \"6d010741-ba9c-43b5-9dc3-87cb17d353d2\" (UID: \"6d010741-ba9c-43b5-9dc3-87cb17d353d2\") " Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.827519 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d010741-ba9c-43b5-9dc3-87cb17d353d2-utilities" (OuterVolumeSpecName: "utilities") pod "6d010741-ba9c-43b5-9dc3-87cb17d353d2" (UID: "6d010741-ba9c-43b5-9dc3-87cb17d353d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.832963 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cq68x"] Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.833139 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d426c096-b6d9-4696-8066-2b9ec75356af-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d426c096-b6d9-4696-8066-2b9ec75356af" (UID: "d426c096-b6d9-4696-8066-2b9ec75356af"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.835743 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d010741-ba9c-43b5-9dc3-87cb17d353d2-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.835771 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fc82b44-ef8d-4f7c-a022-fcbed68b1fab-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.835785 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b3a151-c2e9-4461-92c3-b7752926f08c-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.835807 4667 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d426c096-b6d9-4696-8066-2b9ec75356af-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.841319 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b3a151-c2e9-4461-92c3-b7752926f08c-kube-api-access-jmrmj" (OuterVolumeSpecName: "kube-api-access-jmrmj") pod "b6b3a151-c2e9-4461-92c3-b7752926f08c" (UID: "b6b3a151-c2e9-4461-92c3-b7752926f08c"). InnerVolumeSpecName "kube-api-access-jmrmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.841489 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d010741-ba9c-43b5-9dc3-87cb17d353d2-kube-api-access-8x69x" (OuterVolumeSpecName: "kube-api-access-8x69x") pod "6d010741-ba9c-43b5-9dc3-87cb17d353d2" (UID: "6d010741-ba9c-43b5-9dc3-87cb17d353d2"). InnerVolumeSpecName "kube-api-access-8x69x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.844580 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d426c096-b6d9-4696-8066-2b9ec75356af-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d426c096-b6d9-4696-8066-2b9ec75356af" (UID: "d426c096-b6d9-4696-8066-2b9ec75356af"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.845801 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fc82b44-ef8d-4f7c-a022-fcbed68b1fab-kube-api-access-xzjr5" (OuterVolumeSpecName: "kube-api-access-xzjr5") pod "6fc82b44-ef8d-4f7c-a022-fcbed68b1fab" (UID: "6fc82b44-ef8d-4f7c-a022-fcbed68b1fab"). InnerVolumeSpecName "kube-api-access-xzjr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.846292 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d426c096-b6d9-4696-8066-2b9ec75356af-kube-api-access-527dl" (OuterVolumeSpecName: "kube-api-access-527dl") pod "d426c096-b6d9-4696-8066-2b9ec75356af" (UID: "d426c096-b6d9-4696-8066-2b9ec75356af"). InnerVolumeSpecName "kube-api-access-527dl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.875562 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d010741-ba9c-43b5-9dc3-87cb17d353d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d010741-ba9c-43b5-9dc3-87cb17d353d2" (UID: "6d010741-ba9c-43b5-9dc3-87cb17d353d2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.877347 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cq68x" event={"ID":"eca662bd-5da4-45dd-9d55-714a74234cec","Type":"ContainerStarted","Data":"628f634bd9a7a8149504258ada19522a3ac104bb8a3352373d8b957e9ab337e4"} Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.881230 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fc82b44-ef8d-4f7c-a022-fcbed68b1fab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6fc82b44-ef8d-4f7c-a022-fcbed68b1fab" (UID: "6fc82b44-ef8d-4f7c-a022-fcbed68b1fab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.883515 4667 generic.go:334] "Generic (PLEG): container finished" podID="d426c096-b6d9-4696-8066-2b9ec75356af" containerID="19612eb55c241bb25ef7596fc023d970256a4902beb8863cd5ef2446af5663e4" exitCode=0 Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.883573 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" event={"ID":"d426c096-b6d9-4696-8066-2b9ec75356af","Type":"ContainerDied","Data":"19612eb55c241bb25ef7596fc023d970256a4902beb8863cd5ef2446af5663e4"} Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.883601 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" event={"ID":"d426c096-b6d9-4696-8066-2b9ec75356af","Type":"ContainerDied","Data":"5ef4cf80add0ba8ce141d7ebc9a980bca9437d86d568ba96f6a0cf0d62a4c2b1"} Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.883622 4667 scope.go:117] "RemoveContainer" containerID="19612eb55c241bb25ef7596fc023d970256a4902beb8863cd5ef2446af5663e4" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.883767 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7pbrg" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.892173 4667 generic.go:334] "Generic (PLEG): container finished" podID="b6b3a151-c2e9-4461-92c3-b7752926f08c" containerID="54d0edca032e18cf999c736223dd9c70d30a429057f87ed564c7bf6678056ea6" exitCode=0 Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.892237 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz7n8" event={"ID":"b6b3a151-c2e9-4461-92c3-b7752926f08c","Type":"ContainerDied","Data":"54d0edca032e18cf999c736223dd9c70d30a429057f87ed564c7bf6678056ea6"} Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.892266 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fz7n8" event={"ID":"b6b3a151-c2e9-4461-92c3-b7752926f08c","Type":"ContainerDied","Data":"87c555801fefda758a455f84fb9013f9697af801af3b077d2f58d9144a57740c"} Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.892348 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fz7n8" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.905449 4667 generic.go:334] "Generic (PLEG): container finished" podID="6fc82b44-ef8d-4f7c-a022-fcbed68b1fab" containerID="1f0936dcdce432c420bef9d12c954ae33873440cfa4dbcf167acc707c076a5c4" exitCode=0 Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.905556 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lh6xn" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.905565 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lh6xn" event={"ID":"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab","Type":"ContainerDied","Data":"1f0936dcdce432c420bef9d12c954ae33873440cfa4dbcf167acc707c076a5c4"} Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.905632 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lh6xn" event={"ID":"6fc82b44-ef8d-4f7c-a022-fcbed68b1fab","Type":"ContainerDied","Data":"28e06d197924f8c794373cadcb36738993da210245c2ee08312c95c40287617b"} Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.910636 4667 generic.go:334] "Generic (PLEG): container finished" podID="6d010741-ba9c-43b5-9dc3-87cb17d353d2" containerID="38f9f897055c327a1fd1b71b991eeb503e77ff9d41adadf4ca7ed4d646a588fa" exitCode=0 Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.910691 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gzpml" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.910793 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gzpml" event={"ID":"6d010741-ba9c-43b5-9dc3-87cb17d353d2","Type":"ContainerDied","Data":"38f9f897055c327a1fd1b71b991eeb503e77ff9d41adadf4ca7ed4d646a588fa"} Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.910832 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gzpml" event={"ID":"6d010741-ba9c-43b5-9dc3-87cb17d353d2","Type":"ContainerDied","Data":"dec63d8b1db1a1692512f25ff1e018cd9d79140cc703c764f386765238354a75"} Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.917376 4667 generic.go:334] "Generic (PLEG): container finished" podID="7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6" containerID="650ff2bbf8345fc13923d8ca8414b911d79ba6f83c47772614c73467b9b8e9f8" exitCode=0 Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.917422 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4lz7l" event={"ID":"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6","Type":"ContainerDied","Data":"650ff2bbf8345fc13923d8ca8414b911d79ba6f83c47772614c73467b9b8e9f8"} Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.917458 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4lz7l" event={"ID":"7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6","Type":"ContainerDied","Data":"ac7ce3b43a6a117a4342e3655788aa6b7375e558555f753c127fd65f56f468fd"} Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.917599 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4lz7l" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.938228 4667 scope.go:117] "RemoveContainer" containerID="e94d4e8ba1eb248eeb1951f7ff5f115f37d3247fe9c66b1b88da95e50014b0ec" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.940806 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d010741-ba9c-43b5-9dc3-87cb17d353d2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.940830 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-527dl\" (UniqueName: \"kubernetes.io/projected/d426c096-b6d9-4696-8066-2b9ec75356af-kube-api-access-527dl\") on node \"crc\" DevicePath \"\"" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.940858 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmrmj\" (UniqueName: \"kubernetes.io/projected/b6b3a151-c2e9-4461-92c3-b7752926f08c-kube-api-access-jmrmj\") on node \"crc\" DevicePath \"\"" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.940874 4667 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d426c096-b6d9-4696-8066-2b9ec75356af-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.940886 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x69x\" (UniqueName: \"kubernetes.io/projected/6d010741-ba9c-43b5-9dc3-87cb17d353d2-kube-api-access-8x69x\") on node \"crc\" DevicePath \"\"" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.940898 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzjr5\" 
(UniqueName: \"kubernetes.io/projected/6fc82b44-ef8d-4f7c-a022-fcbed68b1fab-kube-api-access-xzjr5\") on node \"crc\" DevicePath \"\"" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.940911 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fc82b44-ef8d-4f7c-a022-fcbed68b1fab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.990151 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7pbrg"] Jan 31 03:54:33 crc kubenswrapper[4667]: I0131 03:54:33.999605 4667 scope.go:117] "RemoveContainer" containerID="19612eb55c241bb25ef7596fc023d970256a4902beb8863cd5ef2446af5663e4" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.000262 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19612eb55c241bb25ef7596fc023d970256a4902beb8863cd5ef2446af5663e4\": container with ID starting with 19612eb55c241bb25ef7596fc023d970256a4902beb8863cd5ef2446af5663e4 not found: ID does not exist" containerID="19612eb55c241bb25ef7596fc023d970256a4902beb8863cd5ef2446af5663e4" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.000430 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19612eb55c241bb25ef7596fc023d970256a4902beb8863cd5ef2446af5663e4"} err="failed to get container status \"19612eb55c241bb25ef7596fc023d970256a4902beb8863cd5ef2446af5663e4\": rpc error: code = NotFound desc = could not find container \"19612eb55c241bb25ef7596fc023d970256a4902beb8863cd5ef2446af5663e4\": container with ID starting with 19612eb55c241bb25ef7596fc023d970256a4902beb8863cd5ef2446af5663e4 not found: ID does not exist" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.000561 4667 scope.go:117] "RemoveContainer" containerID="e94d4e8ba1eb248eeb1951f7ff5f115f37d3247fe9c66b1b88da95e50014b0ec" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.002400 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7pbrg"] Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.003928 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e94d4e8ba1eb248eeb1951f7ff5f115f37d3247fe9c66b1b88da95e50014b0ec\": container with ID starting with e94d4e8ba1eb248eeb1951f7ff5f115f37d3247fe9c66b1b88da95e50014b0ec not found: ID does not exist" containerID="e94d4e8ba1eb248eeb1951f7ff5f115f37d3247fe9c66b1b88da95e50014b0ec" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.003982 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e94d4e8ba1eb248eeb1951f7ff5f115f37d3247fe9c66b1b88da95e50014b0ec"} err="failed to get container status \"e94d4e8ba1eb248eeb1951f7ff5f115f37d3247fe9c66b1b88da95e50014b0ec\": rpc error: code = NotFound desc = could not find container \"e94d4e8ba1eb248eeb1951f7ff5f115f37d3247fe9c66b1b88da95e50014b0ec\": container with ID starting with e94d4e8ba1eb248eeb1951f7ff5f115f37d3247fe9c66b1b88da95e50014b0ec not found: ID does not exist" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.004007 4667 scope.go:117] "RemoveContainer" containerID="54d0edca032e18cf999c736223dd9c70d30a429057f87ed564c7bf6678056ea6" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.016060 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/b6b3a151-c2e9-4461-92c3-b7752926f08c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6b3a151-c2e9-4461-92c3-b7752926f08c" (UID: "b6b3a151-c2e9-4461-92c3-b7752926f08c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.026010 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lh6xn"] Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.026303 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lh6xn"] Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.026372 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4lz7l"] Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.029641 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4lz7l"] Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.042281 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b3a151-c2e9-4461-92c3-b7752926f08c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.043087 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gzpml"] Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.046012 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gzpml"] Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.047290 4667 scope.go:117] "RemoveContainer" containerID="05af988befd32ddd2d7e605a49a92b22e48eb87342711e074dd4a3092dfd5470" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.078071 4667 scope.go:117] "RemoveContainer" containerID="89556686eaa50729596db2202c0955c41b5e9aeff83750d55288f27d240e2957" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.100170 4667 scope.go:117] "RemoveContainer" containerID="54d0edca032e18cf999c736223dd9c70d30a429057f87ed564c7bf6678056ea6" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.100652 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54d0edca032e18cf999c736223dd9c70d30a429057f87ed564c7bf6678056ea6\": container with ID starting with 54d0edca032e18cf999c736223dd9c70d30a429057f87ed564c7bf6678056ea6 not found: ID does not exist" containerID="54d0edca032e18cf999c736223dd9c70d30a429057f87ed564c7bf6678056ea6" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.100687 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54d0edca032e18cf999c736223dd9c70d30a429057f87ed564c7bf6678056ea6"} err="failed to get container status \"54d0edca032e18cf999c736223dd9c70d30a429057f87ed564c7bf6678056ea6\": rpc error: code = NotFound desc = could not find container \"54d0edca032e18cf999c736223dd9c70d30a429057f87ed564c7bf6678056ea6\": container with ID starting with 54d0edca032e18cf999c736223dd9c70d30a429057f87ed564c7bf6678056ea6 not found: ID does not exist" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.100711 4667 scope.go:117] "RemoveContainer" containerID="05af988befd32ddd2d7e605a49a92b22e48eb87342711e074dd4a3092dfd5470" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.101175 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"05af988befd32ddd2d7e605a49a92b22e48eb87342711e074dd4a3092dfd5470\": container with ID starting with 05af988befd32ddd2d7e605a49a92b22e48eb87342711e074dd4a3092dfd5470 not found: ID does not exist" containerID="05af988befd32ddd2d7e605a49a92b22e48eb87342711e074dd4a3092dfd5470" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.101225 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05af988befd32ddd2d7e605a49a92b22e48eb87342711e074dd4a3092dfd5470"} err="failed to get container status \"05af988befd32ddd2d7e605a49a92b22e48eb87342711e074dd4a3092dfd5470\": rpc error: code = NotFound desc = could not find container \"05af988befd32ddd2d7e605a49a92b22e48eb87342711e074dd4a3092dfd5470\": container with ID starting with 05af988befd32ddd2d7e605a49a92b22e48eb87342711e074dd4a3092dfd5470 not found: ID does not exist" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.101258 4667 scope.go:117] "RemoveContainer" containerID="89556686eaa50729596db2202c0955c41b5e9aeff83750d55288f27d240e2957" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.101832 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89556686eaa50729596db2202c0955c41b5e9aeff83750d55288f27d240e2957\": container with ID starting with 89556686eaa50729596db2202c0955c41b5e9aeff83750d55288f27d240e2957 not found: ID does not exist" containerID="89556686eaa50729596db2202c0955c41b5e9aeff83750d55288f27d240e2957" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.101886 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89556686eaa50729596db2202c0955c41b5e9aeff83750d55288f27d240e2957"} err="failed to get container status \"89556686eaa50729596db2202c0955c41b5e9aeff83750d55288f27d240e2957\": rpc error: code = NotFound desc = could not find container \"89556686eaa50729596db2202c0955c41b5e9aeff83750d55288f27d240e2957\": container with ID starting with 89556686eaa50729596db2202c0955c41b5e9aeff83750d55288f27d240e2957 not found: ID does not exist" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.101904 4667 scope.go:117] "RemoveContainer" containerID="1f0936dcdce432c420bef9d12c954ae33873440cfa4dbcf167acc707c076a5c4" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.119290 4667 scope.go:117] "RemoveContainer" containerID="e9c48e757c90bab7963af4b346bf0660d686bad06c1194af33e61d7af349f3f0" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.139603 4667 scope.go:117] "RemoveContainer" containerID="e270008bbd46a4fc749202dcbb26e58d0db6ef8adff777b0af34a1308a3d3ce5" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.156018 4667 scope.go:117] "RemoveContainer" containerID="1f0936dcdce432c420bef9d12c954ae33873440cfa4dbcf167acc707c076a5c4" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.157422 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f0936dcdce432c420bef9d12c954ae33873440cfa4dbcf167acc707c076a5c4\": container with ID starting with 1f0936dcdce432c420bef9d12c954ae33873440cfa4dbcf167acc707c076a5c4 not found: ID does not exist" containerID="1f0936dcdce432c420bef9d12c954ae33873440cfa4dbcf167acc707c076a5c4" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.157458 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f0936dcdce432c420bef9d12c954ae33873440cfa4dbcf167acc707c076a5c4"} 
err="failed to get container status \"1f0936dcdce432c420bef9d12c954ae33873440cfa4dbcf167acc707c076a5c4\": rpc error: code = NotFound desc = could not find container \"1f0936dcdce432c420bef9d12c954ae33873440cfa4dbcf167acc707c076a5c4\": container with ID starting with 1f0936dcdce432c420bef9d12c954ae33873440cfa4dbcf167acc707c076a5c4 not found: ID does not exist" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.157490 4667 scope.go:117] "RemoveContainer" containerID="e9c48e757c90bab7963af4b346bf0660d686bad06c1194af33e61d7af349f3f0" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.159035 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9c48e757c90bab7963af4b346bf0660d686bad06c1194af33e61d7af349f3f0\": container with ID starting with e9c48e757c90bab7963af4b346bf0660d686bad06c1194af33e61d7af349f3f0 not found: ID does not exist" containerID="e9c48e757c90bab7963af4b346bf0660d686bad06c1194af33e61d7af349f3f0" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.159067 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9c48e757c90bab7963af4b346bf0660d686bad06c1194af33e61d7af349f3f0"} err="failed to get container status \"e9c48e757c90bab7963af4b346bf0660d686bad06c1194af33e61d7af349f3f0\": rpc error: code = NotFound desc = could not find container \"e9c48e757c90bab7963af4b346bf0660d686bad06c1194af33e61d7af349f3f0\": container with ID starting with e9c48e757c90bab7963af4b346bf0660d686bad06c1194af33e61d7af349f3f0 not found: ID does not exist" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.159091 4667 scope.go:117] "RemoveContainer" containerID="e270008bbd46a4fc749202dcbb26e58d0db6ef8adff777b0af34a1308a3d3ce5" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.159540 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e270008bbd46a4fc749202dcbb26e58d0db6ef8adff777b0af34a1308a3d3ce5\": container with ID starting with e270008bbd46a4fc749202dcbb26e58d0db6ef8adff777b0af34a1308a3d3ce5 not found: ID does not exist" containerID="e270008bbd46a4fc749202dcbb26e58d0db6ef8adff777b0af34a1308a3d3ce5" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.159579 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e270008bbd46a4fc749202dcbb26e58d0db6ef8adff777b0af34a1308a3d3ce5"} err="failed to get container status \"e270008bbd46a4fc749202dcbb26e58d0db6ef8adff777b0af34a1308a3d3ce5\": rpc error: code = NotFound desc = could not find container \"e270008bbd46a4fc749202dcbb26e58d0db6ef8adff777b0af34a1308a3d3ce5\": container with ID starting with e270008bbd46a4fc749202dcbb26e58d0db6ef8adff777b0af34a1308a3d3ce5 not found: ID does not exist" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.159607 4667 scope.go:117] "RemoveContainer" containerID="38f9f897055c327a1fd1b71b991eeb503e77ff9d41adadf4ca7ed4d646a588fa" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.174149 4667 scope.go:117] "RemoveContainer" containerID="6b0a465ff042689a202f61d53731ecac8336dee14fb3097603470023e1a65e7f" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.189417 4667 scope.go:117] "RemoveContainer" containerID="df2a65378bc1f97e5ceba68199aaeaeb15a854bac449b7f6823a0b56d1cc2a0f" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.203707 4667 scope.go:117] "RemoveContainer" containerID="38f9f897055c327a1fd1b71b991eeb503e77ff9d41adadf4ca7ed4d646a588fa" 
Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.205472 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38f9f897055c327a1fd1b71b991eeb503e77ff9d41adadf4ca7ed4d646a588fa\": container with ID starting with 38f9f897055c327a1fd1b71b991eeb503e77ff9d41adadf4ca7ed4d646a588fa not found: ID does not exist" containerID="38f9f897055c327a1fd1b71b991eeb503e77ff9d41adadf4ca7ed4d646a588fa" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.205524 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38f9f897055c327a1fd1b71b991eeb503e77ff9d41adadf4ca7ed4d646a588fa"} err="failed to get container status \"38f9f897055c327a1fd1b71b991eeb503e77ff9d41adadf4ca7ed4d646a588fa\": rpc error: code = NotFound desc = could not find container \"38f9f897055c327a1fd1b71b991eeb503e77ff9d41adadf4ca7ed4d646a588fa\": container with ID starting with 38f9f897055c327a1fd1b71b991eeb503e77ff9d41adadf4ca7ed4d646a588fa not found: ID does not exist" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.205556 4667 scope.go:117] "RemoveContainer" containerID="6b0a465ff042689a202f61d53731ecac8336dee14fb3097603470023e1a65e7f" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.206679 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b0a465ff042689a202f61d53731ecac8336dee14fb3097603470023e1a65e7f\": container with ID starting with 6b0a465ff042689a202f61d53731ecac8336dee14fb3097603470023e1a65e7f not found: ID does not exist" containerID="6b0a465ff042689a202f61d53731ecac8336dee14fb3097603470023e1a65e7f" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.206721 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b0a465ff042689a202f61d53731ecac8336dee14fb3097603470023e1a65e7f"} err="failed to get container status \"6b0a465ff042689a202f61d53731ecac8336dee14fb3097603470023e1a65e7f\": rpc error: code = NotFound desc = could not find container \"6b0a465ff042689a202f61d53731ecac8336dee14fb3097603470023e1a65e7f\": container with ID starting with 6b0a465ff042689a202f61d53731ecac8336dee14fb3097603470023e1a65e7f not found: ID does not exist" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.206770 4667 scope.go:117] "RemoveContainer" containerID="df2a65378bc1f97e5ceba68199aaeaeb15a854bac449b7f6823a0b56d1cc2a0f" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.207096 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df2a65378bc1f97e5ceba68199aaeaeb15a854bac449b7f6823a0b56d1cc2a0f\": container with ID starting with df2a65378bc1f97e5ceba68199aaeaeb15a854bac449b7f6823a0b56d1cc2a0f not found: ID does not exist" containerID="df2a65378bc1f97e5ceba68199aaeaeb15a854bac449b7f6823a0b56d1cc2a0f" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.207118 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df2a65378bc1f97e5ceba68199aaeaeb15a854bac449b7f6823a0b56d1cc2a0f"} err="failed to get container status \"df2a65378bc1f97e5ceba68199aaeaeb15a854bac449b7f6823a0b56d1cc2a0f\": rpc error: code = NotFound desc = could not find container \"df2a65378bc1f97e5ceba68199aaeaeb15a854bac449b7f6823a0b56d1cc2a0f\": container with ID starting with df2a65378bc1f97e5ceba68199aaeaeb15a854bac449b7f6823a0b56d1cc2a0f not found: ID does not exist" Jan 31 03:54:34 crc 
kubenswrapper[4667]: I0131 03:54:34.207132 4667 scope.go:117] "RemoveContainer" containerID="650ff2bbf8345fc13923d8ca8414b911d79ba6f83c47772614c73467b9b8e9f8" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.221397 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fz7n8"] Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.227759 4667 scope.go:117] "RemoveContainer" containerID="9386d92fbf07db9f75520236fbfd77c24238c6123c6a8945dade4a4478cd2dd0" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.228685 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fz7n8"] Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.261975 4667 scope.go:117] "RemoveContainer" containerID="12f75f520a303c4544c3e3e96588ce998de47980ec0e0ead1048d5f554b3eb08" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.278113 4667 scope.go:117] "RemoveContainer" containerID="650ff2bbf8345fc13923d8ca8414b911d79ba6f83c47772614c73467b9b8e9f8" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.279513 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"650ff2bbf8345fc13923d8ca8414b911d79ba6f83c47772614c73467b9b8e9f8\": container with ID starting with 650ff2bbf8345fc13923d8ca8414b911d79ba6f83c47772614c73467b9b8e9f8 not found: ID does not exist" containerID="650ff2bbf8345fc13923d8ca8414b911d79ba6f83c47772614c73467b9b8e9f8" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.279870 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"650ff2bbf8345fc13923d8ca8414b911d79ba6f83c47772614c73467b9b8e9f8"} err="failed to get container status \"650ff2bbf8345fc13923d8ca8414b911d79ba6f83c47772614c73467b9b8e9f8\": rpc error: code = NotFound desc = could not find container \"650ff2bbf8345fc13923d8ca8414b911d79ba6f83c47772614c73467b9b8e9f8\": container with ID starting with 650ff2bbf8345fc13923d8ca8414b911d79ba6f83c47772614c73467b9b8e9f8 not found: ID does not exist" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.279908 4667 scope.go:117] "RemoveContainer" containerID="9386d92fbf07db9f75520236fbfd77c24238c6123c6a8945dade4a4478cd2dd0" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.280522 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9386d92fbf07db9f75520236fbfd77c24238c6123c6a8945dade4a4478cd2dd0\": container with ID starting with 9386d92fbf07db9f75520236fbfd77c24238c6123c6a8945dade4a4478cd2dd0 not found: ID does not exist" containerID="9386d92fbf07db9f75520236fbfd77c24238c6123c6a8945dade4a4478cd2dd0" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.280589 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9386d92fbf07db9f75520236fbfd77c24238c6123c6a8945dade4a4478cd2dd0"} err="failed to get container status \"9386d92fbf07db9f75520236fbfd77c24238c6123c6a8945dade4a4478cd2dd0\": rpc error: code = NotFound desc = could not find container \"9386d92fbf07db9f75520236fbfd77c24238c6123c6a8945dade4a4478cd2dd0\": container with ID starting with 9386d92fbf07db9f75520236fbfd77c24238c6123c6a8945dade4a4478cd2dd0 not found: ID does not exist" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.280632 4667 scope.go:117] "RemoveContainer" containerID="12f75f520a303c4544c3e3e96588ce998de47980ec0e0ead1048d5f554b3eb08" Jan 31 03:54:34 crc kubenswrapper[4667]: 
E0131 03:54:34.281456 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12f75f520a303c4544c3e3e96588ce998de47980ec0e0ead1048d5f554b3eb08\": container with ID starting with 12f75f520a303c4544c3e3e96588ce998de47980ec0e0ead1048d5f554b3eb08 not found: ID does not exist" containerID="12f75f520a303c4544c3e3e96588ce998de47980ec0e0ead1048d5f554b3eb08" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.281494 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12f75f520a303c4544c3e3e96588ce998de47980ec0e0ead1048d5f554b3eb08"} err="failed to get container status \"12f75f520a303c4544c3e3e96588ce998de47980ec0e0ead1048d5f554b3eb08\": rpc error: code = NotFound desc = could not find container \"12f75f520a303c4544c3e3e96588ce998de47980ec0e0ead1048d5f554b3eb08\": container with ID starting with 12f75f520a303c4544c3e3e96588ce998de47980ec0e0ead1048d5f554b3eb08 not found: ID does not exist" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.933558 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cq68x" event={"ID":"eca662bd-5da4-45dd-9d55-714a74234cec","Type":"ContainerStarted","Data":"e5b0b1bf2abdc6c661424e29204ccc9718bf4a7a05263f1ccde0c23a651d54ec"} Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.936017 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cq68x" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.940922 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cq68x" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.952032 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cq68x" podStartSLOduration=2.952016784 podStartE2EDuration="2.952016784s" podCreationTimestamp="2026-01-31 03:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:54:34.950936615 +0000 UTC m=+398.467271914" watchObservedRunningTime="2026-01-31 03:54:34.952016784 +0000 UTC m=+398.468352083" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.966172 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4nf68"] Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.966373 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d010741-ba9c-43b5-9dc3-87cb17d353d2" containerName="extract-utilities" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.966385 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d010741-ba9c-43b5-9dc3-87cb17d353d2" containerName="extract-utilities" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.966396 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d426c096-b6d9-4696-8066-2b9ec75356af" containerName="marketplace-operator" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.966402 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="d426c096-b6d9-4696-8066-2b9ec75356af" containerName="marketplace-operator" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.966411 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fc82b44-ef8d-4f7c-a022-fcbed68b1fab" containerName="extract-content" Jan 31 03:54:34 crc 
kubenswrapper[4667]: I0131 03:54:34.966418 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fc82b44-ef8d-4f7c-a022-fcbed68b1fab" containerName="extract-content" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.966426 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b3a151-c2e9-4461-92c3-b7752926f08c" containerName="extract-utilities" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.966432 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b3a151-c2e9-4461-92c3-b7752926f08c" containerName="extract-utilities" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.966438 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b3a151-c2e9-4461-92c3-b7752926f08c" containerName="extract-content" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.966445 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b3a151-c2e9-4461-92c3-b7752926f08c" containerName="extract-content" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.966452 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6" containerName="registry-server" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.966458 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6" containerName="registry-server" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.966468 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fc82b44-ef8d-4f7c-a022-fcbed68b1fab" containerName="extract-utilities" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.966475 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fc82b44-ef8d-4f7c-a022-fcbed68b1fab" containerName="extract-utilities" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.966483 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b3a151-c2e9-4461-92c3-b7752926f08c" containerName="registry-server" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.966489 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b3a151-c2e9-4461-92c3-b7752926f08c" containerName="registry-server" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.966501 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6" containerName="extract-utilities" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.966507 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6" containerName="extract-utilities" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.966513 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fc82b44-ef8d-4f7c-a022-fcbed68b1fab" containerName="registry-server" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.966519 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fc82b44-ef8d-4f7c-a022-fcbed68b1fab" containerName="registry-server" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.966528 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d010741-ba9c-43b5-9dc3-87cb17d353d2" containerName="registry-server" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.966533 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d010741-ba9c-43b5-9dc3-87cb17d353d2" containerName="registry-server" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.966542 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6" 
containerName="extract-content" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.966548 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6" containerName="extract-content" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.966557 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d010741-ba9c-43b5-9dc3-87cb17d353d2" containerName="extract-content" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.966563 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d010741-ba9c-43b5-9dc3-87cb17d353d2" containerName="extract-content" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.966647 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d010741-ba9c-43b5-9dc3-87cb17d353d2" containerName="registry-server" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.966661 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="d426c096-b6d9-4696-8066-2b9ec75356af" containerName="marketplace-operator" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.966670 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b3a151-c2e9-4461-92c3-b7752926f08c" containerName="registry-server" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.966677 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fc82b44-ef8d-4f7c-a022-fcbed68b1fab" containerName="registry-server" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.966684 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6" containerName="registry-server" Jan 31 03:54:34 crc kubenswrapper[4667]: E0131 03:54:34.966780 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d426c096-b6d9-4696-8066-2b9ec75356af" containerName="marketplace-operator" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.966787 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="d426c096-b6d9-4696-8066-2b9ec75356af" containerName="marketplace-operator" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.966875 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="d426c096-b6d9-4696-8066-2b9ec75356af" containerName="marketplace-operator" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.967435 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4nf68" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.969648 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 03:54:34 crc kubenswrapper[4667]: I0131 03:54:34.978010 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4nf68"] Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.056408 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88ba3bd3-095c-4d75-b2c7-fa72d74704ef-catalog-content\") pod \"redhat-marketplace-4nf68\" (UID: \"88ba3bd3-095c-4d75-b2c7-fa72d74704ef\") " pod="openshift-marketplace/redhat-marketplace-4nf68" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.056529 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88ba3bd3-095c-4d75-b2c7-fa72d74704ef-utilities\") pod \"redhat-marketplace-4nf68\" (UID: \"88ba3bd3-095c-4d75-b2c7-fa72d74704ef\") " pod="openshift-marketplace/redhat-marketplace-4nf68" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.056595 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5krl\" (UniqueName: \"kubernetes.io/projected/88ba3bd3-095c-4d75-b2c7-fa72d74704ef-kube-api-access-k5krl\") pod \"redhat-marketplace-4nf68\" (UID: \"88ba3bd3-095c-4d75-b2c7-fa72d74704ef\") " pod="openshift-marketplace/redhat-marketplace-4nf68" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.158275 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5krl\" (UniqueName: \"kubernetes.io/projected/88ba3bd3-095c-4d75-b2c7-fa72d74704ef-kube-api-access-k5krl\") pod \"redhat-marketplace-4nf68\" (UID: \"88ba3bd3-095c-4d75-b2c7-fa72d74704ef\") " pod="openshift-marketplace/redhat-marketplace-4nf68" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.158399 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88ba3bd3-095c-4d75-b2c7-fa72d74704ef-catalog-content\") pod \"redhat-marketplace-4nf68\" (UID: \"88ba3bd3-095c-4d75-b2c7-fa72d74704ef\") " pod="openshift-marketplace/redhat-marketplace-4nf68" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.158433 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88ba3bd3-095c-4d75-b2c7-fa72d74704ef-utilities\") pod \"redhat-marketplace-4nf68\" (UID: \"88ba3bd3-095c-4d75-b2c7-fa72d74704ef\") " pod="openshift-marketplace/redhat-marketplace-4nf68" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.159088 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88ba3bd3-095c-4d75-b2c7-fa72d74704ef-utilities\") pod \"redhat-marketplace-4nf68\" (UID: \"88ba3bd3-095c-4d75-b2c7-fa72d74704ef\") " pod="openshift-marketplace/redhat-marketplace-4nf68" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.159487 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88ba3bd3-095c-4d75-b2c7-fa72d74704ef-catalog-content\") pod \"redhat-marketplace-4nf68\" (UID: 
\"88ba3bd3-095c-4d75-b2c7-fa72d74704ef\") " pod="openshift-marketplace/redhat-marketplace-4nf68" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.162380 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b72vl"] Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.163754 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b72vl" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.165615 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.178107 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b72vl"] Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.194657 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5krl\" (UniqueName: \"kubernetes.io/projected/88ba3bd3-095c-4d75-b2c7-fa72d74704ef-kube-api-access-k5krl\") pod \"redhat-marketplace-4nf68\" (UID: \"88ba3bd3-095c-4d75-b2c7-fa72d74704ef\") " pod="openshift-marketplace/redhat-marketplace-4nf68" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.259900 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3d3bea-fee8-4619-8daf-bef3da273e55-utilities\") pod \"redhat-operators-b72vl\" (UID: \"7c3d3bea-fee8-4619-8daf-bef3da273e55\") " pod="openshift-marketplace/redhat-operators-b72vl" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.259981 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w44pt\" (UniqueName: \"kubernetes.io/projected/7c3d3bea-fee8-4619-8daf-bef3da273e55-kube-api-access-w44pt\") pod \"redhat-operators-b72vl\" (UID: \"7c3d3bea-fee8-4619-8daf-bef3da273e55\") " pod="openshift-marketplace/redhat-operators-b72vl" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.260023 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3d3bea-fee8-4619-8daf-bef3da273e55-catalog-content\") pod \"redhat-operators-b72vl\" (UID: \"7c3d3bea-fee8-4619-8daf-bef3da273e55\") " pod="openshift-marketplace/redhat-operators-b72vl" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.286376 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4nf68" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.291035 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d010741-ba9c-43b5-9dc3-87cb17d353d2" path="/var/lib/kubelet/pods/6d010741-ba9c-43b5-9dc3-87cb17d353d2/volumes" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.291723 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fc82b44-ef8d-4f7c-a022-fcbed68b1fab" path="/var/lib/kubelet/pods/6fc82b44-ef8d-4f7c-a022-fcbed68b1fab/volumes" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.292339 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6" path="/var/lib/kubelet/pods/7f3f5631-08d7-40f6-8bd5-96ecc96bd2e6/volumes" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.293417 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6b3a151-c2e9-4461-92c3-b7752926f08c" path="/var/lib/kubelet/pods/b6b3a151-c2e9-4461-92c3-b7752926f08c/volumes" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.294131 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d426c096-b6d9-4696-8066-2b9ec75356af" path="/var/lib/kubelet/pods/d426c096-b6d9-4696-8066-2b9ec75356af/volumes" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.362411 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3d3bea-fee8-4619-8daf-bef3da273e55-catalog-content\") pod \"redhat-operators-b72vl\" (UID: \"7c3d3bea-fee8-4619-8daf-bef3da273e55\") " pod="openshift-marketplace/redhat-operators-b72vl" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.363679 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3d3bea-fee8-4619-8daf-bef3da273e55-utilities\") pod \"redhat-operators-b72vl\" (UID: \"7c3d3bea-fee8-4619-8daf-bef3da273e55\") " pod="openshift-marketplace/redhat-operators-b72vl" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.365760 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3d3bea-fee8-4619-8daf-bef3da273e55-catalog-content\") pod \"redhat-operators-b72vl\" (UID: \"7c3d3bea-fee8-4619-8daf-bef3da273e55\") " pod="openshift-marketplace/redhat-operators-b72vl" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.365926 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3d3bea-fee8-4619-8daf-bef3da273e55-utilities\") pod \"redhat-operators-b72vl\" (UID: \"7c3d3bea-fee8-4619-8daf-bef3da273e55\") " pod="openshift-marketplace/redhat-operators-b72vl" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.366812 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w44pt\" (UniqueName: \"kubernetes.io/projected/7c3d3bea-fee8-4619-8daf-bef3da273e55-kube-api-access-w44pt\") pod \"redhat-operators-b72vl\" (UID: \"7c3d3bea-fee8-4619-8daf-bef3da273e55\") " pod="openshift-marketplace/redhat-operators-b72vl" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.426008 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w44pt\" (UniqueName: \"kubernetes.io/projected/7c3d3bea-fee8-4619-8daf-bef3da273e55-kube-api-access-w44pt\") pod \"redhat-operators-b72vl\" (UID: 
\"7c3d3bea-fee8-4619-8daf-bef3da273e55\") " pod="openshift-marketplace/redhat-operators-b72vl" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.481259 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b72vl" Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.756881 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4nf68"] Jan 31 03:54:35 crc kubenswrapper[4667]: W0131 03:54:35.765389 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88ba3bd3_095c_4d75_b2c7_fa72d74704ef.slice/crio-c64d0c82f8b887d0cb6a4052a6d921b7cfed2a129e128de2bf8a83baae4b7f59 WatchSource:0}: Error finding container c64d0c82f8b887d0cb6a4052a6d921b7cfed2a129e128de2bf8a83baae4b7f59: Status 404 returned error can't find the container with id c64d0c82f8b887d0cb6a4052a6d921b7cfed2a129e128de2bf8a83baae4b7f59 Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.929666 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b72vl"] Jan 31 03:54:35 crc kubenswrapper[4667]: W0131 03:54:35.938906 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c3d3bea_fee8_4619_8daf_bef3da273e55.slice/crio-39c83b364f62e47a8fac50120f14eaa75bc1a5ec2e0cb7d6e63678e5eabd72f2 WatchSource:0}: Error finding container 39c83b364f62e47a8fac50120f14eaa75bc1a5ec2e0cb7d6e63678e5eabd72f2: Status 404 returned error can't find the container with id 39c83b364f62e47a8fac50120f14eaa75bc1a5ec2e0cb7d6e63678e5eabd72f2 Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.960899 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b72vl" event={"ID":"7c3d3bea-fee8-4619-8daf-bef3da273e55","Type":"ContainerStarted","Data":"39c83b364f62e47a8fac50120f14eaa75bc1a5ec2e0cb7d6e63678e5eabd72f2"} Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.963962 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4nf68" event={"ID":"88ba3bd3-095c-4d75-b2c7-fa72d74704ef","Type":"ContainerStarted","Data":"dc52f2db4350422963024ede3619adf5d5df9fbee6ddfdb69b140e6df99c6b1d"} Jan 31 03:54:35 crc kubenswrapper[4667]: I0131 03:54:35.964015 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4nf68" event={"ID":"88ba3bd3-095c-4d75-b2c7-fa72d74704ef","Type":"ContainerStarted","Data":"c64d0c82f8b887d0cb6a4052a6d921b7cfed2a129e128de2bf8a83baae4b7f59"} Jan 31 03:54:36 crc kubenswrapper[4667]: I0131 03:54:36.971227 4667 generic.go:334] "Generic (PLEG): container finished" podID="7c3d3bea-fee8-4619-8daf-bef3da273e55" containerID="84f9fcc72ae6545ee8efaa454fb3dd3d37d49b2c1cf23665aaebd4a39df699db" exitCode=0 Jan 31 03:54:36 crc kubenswrapper[4667]: I0131 03:54:36.971436 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b72vl" event={"ID":"7c3d3bea-fee8-4619-8daf-bef3da273e55","Type":"ContainerDied","Data":"84f9fcc72ae6545ee8efaa454fb3dd3d37d49b2c1cf23665aaebd4a39df699db"} Jan 31 03:54:36 crc kubenswrapper[4667]: I0131 03:54:36.974540 4667 generic.go:334] "Generic (PLEG): container finished" podID="88ba3bd3-095c-4d75-b2c7-fa72d74704ef" containerID="dc52f2db4350422963024ede3619adf5d5df9fbee6ddfdb69b140e6df99c6b1d" exitCode=0 Jan 31 03:54:36 crc kubenswrapper[4667]: I0131 
03:54:36.974727 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4nf68" event={"ID":"88ba3bd3-095c-4d75-b2c7-fa72d74704ef","Type":"ContainerDied","Data":"dc52f2db4350422963024ede3619adf5d5df9fbee6ddfdb69b140e6df99c6b1d"} Jan 31 03:54:36 crc kubenswrapper[4667]: I0131 03:54:36.974763 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4nf68" event={"ID":"88ba3bd3-095c-4d75-b2c7-fa72d74704ef","Type":"ContainerStarted","Data":"950f32b8ecc6db6cbd6246207fffc66f7a2eaa26de579281c14e89b798a51e20"} Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.360628 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4bdlg"] Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.361785 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bdlg" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.365185 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.378831 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4bdlg"] Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.401046 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd7np\" (UniqueName: \"kubernetes.io/projected/0b24b83c-995f-4a6f-a567-0ce5c6cbd210-kube-api-access-gd7np\") pod \"certified-operators-4bdlg\" (UID: \"0b24b83c-995f-4a6f-a567-0ce5c6cbd210\") " pod="openshift-marketplace/certified-operators-4bdlg" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.401314 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b24b83c-995f-4a6f-a567-0ce5c6cbd210-catalog-content\") pod \"certified-operators-4bdlg\" (UID: \"0b24b83c-995f-4a6f-a567-0ce5c6cbd210\") " pod="openshift-marketplace/certified-operators-4bdlg" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.401374 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b24b83c-995f-4a6f-a567-0ce5c6cbd210-utilities\") pod \"certified-operators-4bdlg\" (UID: \"0b24b83c-995f-4a6f-a567-0ce5c6cbd210\") " pod="openshift-marketplace/certified-operators-4bdlg" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.502811 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd7np\" (UniqueName: \"kubernetes.io/projected/0b24b83c-995f-4a6f-a567-0ce5c6cbd210-kube-api-access-gd7np\") pod \"certified-operators-4bdlg\" (UID: \"0b24b83c-995f-4a6f-a567-0ce5c6cbd210\") " pod="openshift-marketplace/certified-operators-4bdlg" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.502923 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b24b83c-995f-4a6f-a567-0ce5c6cbd210-catalog-content\") pod \"certified-operators-4bdlg\" (UID: \"0b24b83c-995f-4a6f-a567-0ce5c6cbd210\") " pod="openshift-marketplace/certified-operators-4bdlg" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.502961 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0b24b83c-995f-4a6f-a567-0ce5c6cbd210-utilities\") pod \"certified-operators-4bdlg\" (UID: \"0b24b83c-995f-4a6f-a567-0ce5c6cbd210\") " pod="openshift-marketplace/certified-operators-4bdlg" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.503468 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b24b83c-995f-4a6f-a567-0ce5c6cbd210-utilities\") pod \"certified-operators-4bdlg\" (UID: \"0b24b83c-995f-4a6f-a567-0ce5c6cbd210\") " pod="openshift-marketplace/certified-operators-4bdlg" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.503715 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b24b83c-995f-4a6f-a567-0ce5c6cbd210-catalog-content\") pod \"certified-operators-4bdlg\" (UID: \"0b24b83c-995f-4a6f-a567-0ce5c6cbd210\") " pod="openshift-marketplace/certified-operators-4bdlg" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.524359 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd7np\" (UniqueName: \"kubernetes.io/projected/0b24b83c-995f-4a6f-a567-0ce5c6cbd210-kube-api-access-gd7np\") pod \"certified-operators-4bdlg\" (UID: \"0b24b83c-995f-4a6f-a567-0ce5c6cbd210\") " pod="openshift-marketplace/certified-operators-4bdlg" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.563578 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qnlkn"] Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.565076 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qnlkn" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.584520 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qnlkn"] Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.585371 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.603653 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b321312d-8d4b-4547-98c2-e3226cfb5dc5-utilities\") pod \"community-operators-qnlkn\" (UID: \"b321312d-8d4b-4547-98c2-e3226cfb5dc5\") " pod="openshift-marketplace/community-operators-qnlkn" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.603704 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b321312d-8d4b-4547-98c2-e3226cfb5dc5-catalog-content\") pod \"community-operators-qnlkn\" (UID: \"b321312d-8d4b-4547-98c2-e3226cfb5dc5\") " pod="openshift-marketplace/community-operators-qnlkn" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.603734 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tz2f\" (UniqueName: \"kubernetes.io/projected/b321312d-8d4b-4547-98c2-e3226cfb5dc5-kube-api-access-6tz2f\") pod \"community-operators-qnlkn\" (UID: \"b321312d-8d4b-4547-98c2-e3226cfb5dc5\") " pod="openshift-marketplace/community-operators-qnlkn" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.683260 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4bdlg" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.705065 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tz2f\" (UniqueName: \"kubernetes.io/projected/b321312d-8d4b-4547-98c2-e3226cfb5dc5-kube-api-access-6tz2f\") pod \"community-operators-qnlkn\" (UID: \"b321312d-8d4b-4547-98c2-e3226cfb5dc5\") " pod="openshift-marketplace/community-operators-qnlkn" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.705188 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b321312d-8d4b-4547-98c2-e3226cfb5dc5-utilities\") pod \"community-operators-qnlkn\" (UID: \"b321312d-8d4b-4547-98c2-e3226cfb5dc5\") " pod="openshift-marketplace/community-operators-qnlkn" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.705228 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b321312d-8d4b-4547-98c2-e3226cfb5dc5-catalog-content\") pod \"community-operators-qnlkn\" (UID: \"b321312d-8d4b-4547-98c2-e3226cfb5dc5\") " pod="openshift-marketplace/community-operators-qnlkn" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.705823 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b321312d-8d4b-4547-98c2-e3226cfb5dc5-catalog-content\") pod \"community-operators-qnlkn\" (UID: \"b321312d-8d4b-4547-98c2-e3226cfb5dc5\") " pod="openshift-marketplace/community-operators-qnlkn" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.705914 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b321312d-8d4b-4547-98c2-e3226cfb5dc5-utilities\") pod \"community-operators-qnlkn\" (UID: \"b321312d-8d4b-4547-98c2-e3226cfb5dc5\") " pod="openshift-marketplace/community-operators-qnlkn" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.723934 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tz2f\" (UniqueName: \"kubernetes.io/projected/b321312d-8d4b-4547-98c2-e3226cfb5dc5-kube-api-access-6tz2f\") pod \"community-operators-qnlkn\" (UID: \"b321312d-8d4b-4547-98c2-e3226cfb5dc5\") " pod="openshift-marketplace/community-operators-qnlkn" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.889537 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qnlkn" Jan 31 03:54:37 crc kubenswrapper[4667]: I0131 03:54:37.998010 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b72vl" event={"ID":"7c3d3bea-fee8-4619-8daf-bef3da273e55","Type":"ContainerStarted","Data":"7236da3f70852e546799c653fb852df48dcc0fbcef799f90e6f9813c37caeef8"} Jan 31 03:54:38 crc kubenswrapper[4667]: I0131 03:54:38.016152 4667 generic.go:334] "Generic (PLEG): container finished" podID="88ba3bd3-095c-4d75-b2c7-fa72d74704ef" containerID="950f32b8ecc6db6cbd6246207fffc66f7a2eaa26de579281c14e89b798a51e20" exitCode=0 Jan 31 03:54:38 crc kubenswrapper[4667]: I0131 03:54:38.016193 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4nf68" event={"ID":"88ba3bd3-095c-4d75-b2c7-fa72d74704ef","Type":"ContainerDied","Data":"950f32b8ecc6db6cbd6246207fffc66f7a2eaa26de579281c14e89b798a51e20"} Jan 31 03:54:38 crc kubenswrapper[4667]: I0131 03:54:38.200227 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4bdlg"] Jan 31 03:54:38 crc kubenswrapper[4667]: I0131 03:54:38.335644 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qnlkn"] Jan 31 03:54:38 crc kubenswrapper[4667]: W0131 03:54:38.344553 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb321312d_8d4b_4547_98c2_e3226cfb5dc5.slice/crio-9530c9f709172fc1123b5f0e98a8d34c8cb795db4c372d52836e239d29868ea4 WatchSource:0}: Error finding container 9530c9f709172fc1123b5f0e98a8d34c8cb795db4c372d52836e239d29868ea4: Status 404 returned error can't find the container with id 9530c9f709172fc1123b5f0e98a8d34c8cb795db4c372d52836e239d29868ea4 Jan 31 03:54:39 crc kubenswrapper[4667]: I0131 03:54:39.023197 4667 generic.go:334] "Generic (PLEG): container finished" podID="0b24b83c-995f-4a6f-a567-0ce5c6cbd210" containerID="649f1de49e96cd20ae5ba92f05cceae24d632743e2eeb2817c084f0cd55484a3" exitCode=0 Jan 31 03:54:39 crc kubenswrapper[4667]: I0131 03:54:39.023276 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bdlg" event={"ID":"0b24b83c-995f-4a6f-a567-0ce5c6cbd210","Type":"ContainerDied","Data":"649f1de49e96cd20ae5ba92f05cceae24d632743e2eeb2817c084f0cd55484a3"} Jan 31 03:54:39 crc kubenswrapper[4667]: I0131 03:54:39.023747 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bdlg" event={"ID":"0b24b83c-995f-4a6f-a567-0ce5c6cbd210","Type":"ContainerStarted","Data":"717c07ffa6e0ff7a2821b1c25df1c3ba890d590da3a463e6ce4b96663c25d9fc"} Jan 31 03:54:39 crc kubenswrapper[4667]: I0131 03:54:39.026180 4667 generic.go:334] "Generic (PLEG): container finished" podID="7c3d3bea-fee8-4619-8daf-bef3da273e55" containerID="7236da3f70852e546799c653fb852df48dcc0fbcef799f90e6f9813c37caeef8" exitCode=0 Jan 31 03:54:39 crc kubenswrapper[4667]: I0131 03:54:39.026250 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b72vl" event={"ID":"7c3d3bea-fee8-4619-8daf-bef3da273e55","Type":"ContainerDied","Data":"7236da3f70852e546799c653fb852df48dcc0fbcef799f90e6f9813c37caeef8"} Jan 31 03:54:39 crc kubenswrapper[4667]: I0131 03:54:39.032543 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4nf68" 
event={"ID":"88ba3bd3-095c-4d75-b2c7-fa72d74704ef","Type":"ContainerStarted","Data":"9bdbe5475d2d1fc4d252c4a9c1481a91a8d47222ed02263e0297d0c028e4f6e8"} Jan 31 03:54:39 crc kubenswrapper[4667]: I0131 03:54:39.035373 4667 generic.go:334] "Generic (PLEG): container finished" podID="b321312d-8d4b-4547-98c2-e3226cfb5dc5" containerID="e794e05dfd03bef865db0b5d22d59ebd547a8724017f76464d8da5c5cedbd3d3" exitCode=0 Jan 31 03:54:39 crc kubenswrapper[4667]: I0131 03:54:39.035408 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qnlkn" event={"ID":"b321312d-8d4b-4547-98c2-e3226cfb5dc5","Type":"ContainerDied","Data":"e794e05dfd03bef865db0b5d22d59ebd547a8724017f76464d8da5c5cedbd3d3"} Jan 31 03:54:39 crc kubenswrapper[4667]: I0131 03:54:39.035446 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qnlkn" event={"ID":"b321312d-8d4b-4547-98c2-e3226cfb5dc5","Type":"ContainerStarted","Data":"9530c9f709172fc1123b5f0e98a8d34c8cb795db4c372d52836e239d29868ea4"} Jan 31 03:54:39 crc kubenswrapper[4667]: I0131 03:54:39.094526 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4nf68" podStartSLOduration=2.647932157 podStartE2EDuration="5.094510434s" podCreationTimestamp="2026-01-31 03:54:34 +0000 UTC" firstStartedPulling="2026-01-31 03:54:35.969857334 +0000 UTC m=+399.486192633" lastFinishedPulling="2026-01-31 03:54:38.416435611 +0000 UTC m=+401.932770910" observedRunningTime="2026-01-31 03:54:39.092821769 +0000 UTC m=+402.609157068" watchObservedRunningTime="2026-01-31 03:54:39.094510434 +0000 UTC m=+402.610845733" Jan 31 03:54:40 crc kubenswrapper[4667]: I0131 03:54:40.043951 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bdlg" event={"ID":"0b24b83c-995f-4a6f-a567-0ce5c6cbd210","Type":"ContainerStarted","Data":"642a802c9c34b25b12c5473dd8f538ed5f2c99bcde3e1d82cf6575c3179ab63e"} Jan 31 03:54:40 crc kubenswrapper[4667]: I0131 03:54:40.047980 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b72vl" event={"ID":"7c3d3bea-fee8-4619-8daf-bef3da273e55","Type":"ContainerStarted","Data":"e3de9084a9d1b642bf5b64b34013e40a15d6a4a7a890fe047be9b199c9a04312"} Jan 31 03:54:40 crc kubenswrapper[4667]: I0131 03:54:40.105495 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b72vl" podStartSLOduration=2.659694473 podStartE2EDuration="5.105470469s" podCreationTimestamp="2026-01-31 03:54:35 +0000 UTC" firstStartedPulling="2026-01-31 03:54:36.973144073 +0000 UTC m=+400.489479372" lastFinishedPulling="2026-01-31 03:54:39.418920069 +0000 UTC m=+402.935255368" observedRunningTime="2026-01-31 03:54:40.096704485 +0000 UTC m=+403.613039784" watchObservedRunningTime="2026-01-31 03:54:40.105470469 +0000 UTC m=+403.621805768" Jan 31 03:54:40 crc kubenswrapper[4667]: I0131 03:54:40.225787 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-hhq8d" Jan 31 03:54:40 crc kubenswrapper[4667]: I0131 03:54:40.303319 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w7g4m"] Jan 31 03:54:41 crc kubenswrapper[4667]: I0131 03:54:41.054642 4667 generic.go:334] "Generic (PLEG): container finished" podID="b321312d-8d4b-4547-98c2-e3226cfb5dc5" 
containerID="4a678e342a244d62c1227576ab32710d8968eea420f32b5200c2f58387d31ec4" exitCode=0 Jan 31 03:54:41 crc kubenswrapper[4667]: I0131 03:54:41.054728 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qnlkn" event={"ID":"b321312d-8d4b-4547-98c2-e3226cfb5dc5","Type":"ContainerDied","Data":"4a678e342a244d62c1227576ab32710d8968eea420f32b5200c2f58387d31ec4"} Jan 31 03:54:41 crc kubenswrapper[4667]: I0131 03:54:41.057803 4667 generic.go:334] "Generic (PLEG): container finished" podID="0b24b83c-995f-4a6f-a567-0ce5c6cbd210" containerID="642a802c9c34b25b12c5473dd8f538ed5f2c99bcde3e1d82cf6575c3179ab63e" exitCode=0 Jan 31 03:54:41 crc kubenswrapper[4667]: I0131 03:54:41.059883 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bdlg" event={"ID":"0b24b83c-995f-4a6f-a567-0ce5c6cbd210","Type":"ContainerDied","Data":"642a802c9c34b25b12c5473dd8f538ed5f2c99bcde3e1d82cf6575c3179ab63e"} Jan 31 03:54:42 crc kubenswrapper[4667]: I0131 03:54:42.067651 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bdlg" event={"ID":"0b24b83c-995f-4a6f-a567-0ce5c6cbd210","Type":"ContainerStarted","Data":"2892d343d0c49caf01bce5fc49b676e656fd01119ac4be0c7f57ab9009a2dbb0"} Jan 31 03:54:42 crc kubenswrapper[4667]: I0131 03:54:42.073693 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qnlkn" event={"ID":"b321312d-8d4b-4547-98c2-e3226cfb5dc5","Type":"ContainerStarted","Data":"08e48b066b8d429fbdc41325c3b130895b74d564229858c57f4d909a1cdd38b1"} Jan 31 03:54:42 crc kubenswrapper[4667]: I0131 03:54:42.091459 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4bdlg" podStartSLOduration=2.553438151 podStartE2EDuration="5.091433526s" podCreationTimestamp="2026-01-31 03:54:37 +0000 UTC" firstStartedPulling="2026-01-31 03:54:39.026122094 +0000 UTC m=+402.542457393" lastFinishedPulling="2026-01-31 03:54:41.564117469 +0000 UTC m=+405.080452768" observedRunningTime="2026-01-31 03:54:42.088711003 +0000 UTC m=+405.605046302" watchObservedRunningTime="2026-01-31 03:54:42.091433526 +0000 UTC m=+405.607768825" Jan 31 03:54:42 crc kubenswrapper[4667]: I0131 03:54:42.113626 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qnlkn" podStartSLOduration=2.683131872 podStartE2EDuration="5.113597999s" podCreationTimestamp="2026-01-31 03:54:37 +0000 UTC" firstStartedPulling="2026-01-31 03:54:39.03683141 +0000 UTC m=+402.553166709" lastFinishedPulling="2026-01-31 03:54:41.467297517 +0000 UTC m=+404.983632836" observedRunningTime="2026-01-31 03:54:42.113015694 +0000 UTC m=+405.629350993" watchObservedRunningTime="2026-01-31 03:54:42.113597999 +0000 UTC m=+405.629933298" Jan 31 03:54:45 crc kubenswrapper[4667]: I0131 03:54:45.289115 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4nf68" Jan 31 03:54:45 crc kubenswrapper[4667]: I0131 03:54:45.290039 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4nf68" Jan 31 03:54:45 crc kubenswrapper[4667]: I0131 03:54:45.333441 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4nf68" Jan 31 03:54:45 crc kubenswrapper[4667]: I0131 03:54:45.481668 4667 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b72vl" Jan 31 03:54:45 crc kubenswrapper[4667]: I0131 03:54:45.481748 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b72vl" Jan 31 03:54:45 crc kubenswrapper[4667]: I0131 03:54:45.704567 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 03:54:45 crc kubenswrapper[4667]: I0131 03:54:45.704676 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 03:54:46 crc kubenswrapper[4667]: I0131 03:54:46.135411 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4nf68" Jan 31 03:54:46 crc kubenswrapper[4667]: I0131 03:54:46.521668 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b72vl" podUID="7c3d3bea-fee8-4619-8daf-bef3da273e55" containerName="registry-server" probeResult="failure" output=< Jan 31 03:54:46 crc kubenswrapper[4667]: timeout: failed to connect service ":50051" within 1s Jan 31 03:54:46 crc kubenswrapper[4667]: > Jan 31 03:54:47 crc kubenswrapper[4667]: I0131 03:54:47.683906 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4bdlg" Jan 31 03:54:47 crc kubenswrapper[4667]: I0131 03:54:47.684173 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4bdlg" Jan 31 03:54:47 crc kubenswrapper[4667]: I0131 03:54:47.732518 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4bdlg" Jan 31 03:54:47 crc kubenswrapper[4667]: I0131 03:54:47.890716 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qnlkn" Jan 31 03:54:47 crc kubenswrapper[4667]: I0131 03:54:47.890782 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qnlkn" Jan 31 03:54:47 crc kubenswrapper[4667]: I0131 03:54:47.937580 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qnlkn" Jan 31 03:54:48 crc kubenswrapper[4667]: I0131 03:54:48.185821 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qnlkn" Jan 31 03:54:48 crc kubenswrapper[4667]: I0131 03:54:48.186491 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4bdlg" Jan 31 03:54:55 crc kubenswrapper[4667]: I0131 03:54:55.523820 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b72vl" Jan 31 03:54:55 crc kubenswrapper[4667]: I0131 03:54:55.573609 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b72vl" Jan 31 03:55:05 
crc kubenswrapper[4667]: I0131 03:55:05.361349 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" podUID="5c9ccc10-6c02-463f-b2fd-a89fcacdb598" containerName="registry" containerID="cri-o://7bf7bf59f185b8e33e003d891a86c857db863f0d9832f6349e366d20b72087c7" gracePeriod=30 Jan 31 03:55:05 crc kubenswrapper[4667]: I0131 03:55:05.857171 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:55:05 crc kubenswrapper[4667]: I0131 03:55:05.946930 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-installation-pull-secrets\") pod \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " Jan 31 03:55:05 crc kubenswrapper[4667]: I0131 03:55:05.947083 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-bound-sa-token\") pod \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " Jan 31 03:55:05 crc kubenswrapper[4667]: I0131 03:55:05.947116 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-ca-trust-extracted\") pod \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " Jan 31 03:55:05 crc kubenswrapper[4667]: I0131 03:55:05.947209 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-registry-certificates\") pod \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " Jan 31 03:55:05 crc kubenswrapper[4667]: I0131 03:55:05.947238 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-registry-tls\") pod \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " Jan 31 03:55:05 crc kubenswrapper[4667]: I0131 03:55:05.947269 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-trusted-ca\") pod \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " Jan 31 03:55:05 crc kubenswrapper[4667]: I0131 03:55:05.947473 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " Jan 31 03:55:05 crc kubenswrapper[4667]: I0131 03:55:05.947506 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsnvh\" (UniqueName: \"kubernetes.io/projected/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-kube-api-access-gsnvh\") pod \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " Jan 31 03:55:05 crc kubenswrapper[4667]: I0131 03:55:05.948177 4667 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "5c9ccc10-6c02-463f-b2fd-a89fcacdb598" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:55:05 crc kubenswrapper[4667]: I0131 03:55:05.949076 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5c9ccc10-6c02-463f-b2fd-a89fcacdb598" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:55:05 crc kubenswrapper[4667]: I0131 03:55:05.972600 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "5c9ccc10-6c02-463f-b2fd-a89fcacdb598" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:55:05 crc kubenswrapper[4667]: E0131 03:55:05.972709 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:5c9ccc10-6c02-463f-b2fd-a89fcacdb598 nodeName:}" failed. No retries permitted until 2026-01-31 03:55:06.472693309 +0000 UTC m=+429.989028608 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "registry-storage" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "5c9ccc10-6c02-463f-b2fd-a89fcacdb598" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Jan 31 03:55:05 crc kubenswrapper[4667]: I0131 03:55:05.973581 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "5c9ccc10-6c02-463f-b2fd-a89fcacdb598" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:55:05 crc kubenswrapper[4667]: I0131 03:55:05.973644 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-kube-api-access-gsnvh" (OuterVolumeSpecName: "kube-api-access-gsnvh") pod "5c9ccc10-6c02-463f-b2fd-a89fcacdb598" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598"). InnerVolumeSpecName "kube-api-access-gsnvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:55:05 crc kubenswrapper[4667]: I0131 03:55:05.973999 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "5c9ccc10-6c02-463f-b2fd-a89fcacdb598" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:55:05 crc kubenswrapper[4667]: I0131 03:55:05.981360 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "5c9ccc10-6c02-463f-b2fd-a89fcacdb598" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 03:55:06 crc kubenswrapper[4667]: I0131 03:55:06.049901 4667 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 03:55:06 crc kubenswrapper[4667]: I0131 03:55:06.050294 4667 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 03:55:06 crc kubenswrapper[4667]: I0131 03:55:06.050515 4667 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 03:55:06 crc kubenswrapper[4667]: I0131 03:55:06.050636 4667 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 03:55:06 crc kubenswrapper[4667]: I0131 03:55:06.050766 4667 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 03:55:06 crc kubenswrapper[4667]: I0131 03:55:06.051593 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsnvh\" (UniqueName: \"kubernetes.io/projected/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-kube-api-access-gsnvh\") on node \"crc\" DevicePath \"\"" Jan 31 03:55:06 crc kubenswrapper[4667]: I0131 03:55:06.051881 4667 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5c9ccc10-6c02-463f-b2fd-a89fcacdb598-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 03:55:06 crc kubenswrapper[4667]: I0131 03:55:06.236538 4667 generic.go:334] "Generic (PLEG): container finished" podID="5c9ccc10-6c02-463f-b2fd-a89fcacdb598" containerID="7bf7bf59f185b8e33e003d891a86c857db863f0d9832f6349e366d20b72087c7" exitCode=0 Jan 31 03:55:06 crc kubenswrapper[4667]: I0131 03:55:06.236622 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" event={"ID":"5c9ccc10-6c02-463f-b2fd-a89fcacdb598","Type":"ContainerDied","Data":"7bf7bf59f185b8e33e003d891a86c857db863f0d9832f6349e366d20b72087c7"} Jan 31 03:55:06 crc kubenswrapper[4667]: I0131 03:55:06.236678 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" event={"ID":"5c9ccc10-6c02-463f-b2fd-a89fcacdb598","Type":"ContainerDied","Data":"09e53d7f2ec4bafd8e57f3301ae6ca977607bbffbcb114ae89ecce4aeb868034"} Jan 31 03:55:06 crc kubenswrapper[4667]: I0131 03:55:06.236715 4667 scope.go:117] "RemoveContainer" containerID="7bf7bf59f185b8e33e003d891a86c857db863f0d9832f6349e366d20b72087c7" Jan 31 03:55:06 crc 
kubenswrapper[4667]: I0131 03:55:06.236728 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w7g4m" Jan 31 03:55:06 crc kubenswrapper[4667]: I0131 03:55:06.270666 4667 scope.go:117] "RemoveContainer" containerID="7bf7bf59f185b8e33e003d891a86c857db863f0d9832f6349e366d20b72087c7" Jan 31 03:55:06 crc kubenswrapper[4667]: E0131 03:55:06.272313 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bf7bf59f185b8e33e003d891a86c857db863f0d9832f6349e366d20b72087c7\": container with ID starting with 7bf7bf59f185b8e33e003d891a86c857db863f0d9832f6349e366d20b72087c7 not found: ID does not exist" containerID="7bf7bf59f185b8e33e003d891a86c857db863f0d9832f6349e366d20b72087c7" Jan 31 03:55:06 crc kubenswrapper[4667]: I0131 03:55:06.272424 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bf7bf59f185b8e33e003d891a86c857db863f0d9832f6349e366d20b72087c7"} err="failed to get container status \"7bf7bf59f185b8e33e003d891a86c857db863f0d9832f6349e366d20b72087c7\": rpc error: code = NotFound desc = could not find container \"7bf7bf59f185b8e33e003d891a86c857db863f0d9832f6349e366d20b72087c7\": container with ID starting with 7bf7bf59f185b8e33e003d891a86c857db863f0d9832f6349e366d20b72087c7 not found: ID does not exist" Jan 31 03:55:06 crc kubenswrapper[4667]: I0131 03:55:06.560170 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\" (UID: \"5c9ccc10-6c02-463f-b2fd-a89fcacdb598\") " Jan 31 03:55:06 crc kubenswrapper[4667]: I0131 03:55:06.583237 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "5c9ccc10-6c02-463f-b2fd-a89fcacdb598" (UID: "5c9ccc10-6c02-463f-b2fd-a89fcacdb598"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 03:55:06 crc kubenswrapper[4667]: I0131 03:55:06.686470 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w7g4m"] Jan 31 03:55:06 crc kubenswrapper[4667]: I0131 03:55:06.692686 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w7g4m"] Jan 31 03:55:07 crc kubenswrapper[4667]: I0131 03:55:07.294196 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c9ccc10-6c02-463f-b2fd-a89fcacdb598" path="/var/lib/kubelet/pods/5c9ccc10-6c02-463f-b2fd-a89fcacdb598/volumes" Jan 31 03:55:15 crc kubenswrapper[4667]: I0131 03:55:15.704764 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 03:55:15 crc kubenswrapper[4667]: I0131 03:55:15.705723 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 03:55:15 crc kubenswrapper[4667]: I0131 03:55:15.705811 4667 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 03:55:15 crc kubenswrapper[4667]: I0131 03:55:15.707048 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b687bdeec0b8354da9e648ddc07789b2d329a717764df615911c6a3b3a6768e"} pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 03:55:15 crc kubenswrapper[4667]: I0131 03:55:15.707158 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" containerID="cri-o://7b687bdeec0b8354da9e648ddc07789b2d329a717764df615911c6a3b3a6768e" gracePeriod=600 Jan 31 03:55:16 crc kubenswrapper[4667]: I0131 03:55:16.355828 4667 generic.go:334] "Generic (PLEG): container finished" podID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerID="7b687bdeec0b8354da9e648ddc07789b2d329a717764df615911c6a3b3a6768e" exitCode=0 Jan 31 03:55:16 crc kubenswrapper[4667]: I0131 03:55:16.355940 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerDied","Data":"7b687bdeec0b8354da9e648ddc07789b2d329a717764df615911c6a3b3a6768e"} Jan 31 03:55:16 crc kubenswrapper[4667]: I0131 03:55:16.356455 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerStarted","Data":"51d7a751b57a412d9d741ee969c521abf7aeca931e7ee615449f180a3fa0af59"} Jan 31 03:55:16 crc kubenswrapper[4667]: I0131 03:55:16.356507 4667 scope.go:117] "RemoveContainer" containerID="298f76d02f4ede118feca9fc2d4c9c073e2331174dcf673208ed96478b74232d" 
Jan 31 03:57:45 crc kubenswrapper[4667]: I0131 03:57:45.704967 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 03:57:45 crc kubenswrapper[4667]: I0131 03:57:45.706066 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 03:58:15 crc kubenswrapper[4667]: I0131 03:58:15.705333 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 03:58:15 crc kubenswrapper[4667]: I0131 03:58:15.706289 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 03:58:45 crc kubenswrapper[4667]: I0131 03:58:45.704901 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 03:58:45 crc kubenswrapper[4667]: I0131 03:58:45.705540 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 03:58:45 crc kubenswrapper[4667]: I0131 03:58:45.705601 4667 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 03:58:45 crc kubenswrapper[4667]: I0131 03:58:45.706529 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51d7a751b57a412d9d741ee969c521abf7aeca931e7ee615449f180a3fa0af59"} pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 03:58:45 crc kubenswrapper[4667]: I0131 03:58:45.706628 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" containerID="cri-o://51d7a751b57a412d9d741ee969c521abf7aeca931e7ee615449f180a3fa0af59" gracePeriod=600 Jan 31 03:58:45 crc kubenswrapper[4667]: E0131 03:58:45.823937 4667 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb103bbd2_fb5d_4b2a_8b01_c32f699757df.slice/crio-51d7a751b57a412d9d741ee969c521abf7aeca931e7ee615449f180a3fa0af59.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb103bbd2_fb5d_4b2a_8b01_c32f699757df.slice/crio-conmon-51d7a751b57a412d9d741ee969c521abf7aeca931e7ee615449f180a3fa0af59.scope\": RecentStats: unable to find data in memory cache]" Jan 31 03:58:45 crc kubenswrapper[4667]: I0131 03:58:45.968602 4667 generic.go:334] "Generic (PLEG): container finished" podID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerID="51d7a751b57a412d9d741ee969c521abf7aeca931e7ee615449f180a3fa0af59" exitCode=0 Jan 31 03:58:45 crc kubenswrapper[4667]: I0131 03:58:45.968675 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerDied","Data":"51d7a751b57a412d9d741ee969c521abf7aeca931e7ee615449f180a3fa0af59"} Jan 31 03:58:45 crc kubenswrapper[4667]: I0131 03:58:45.969552 4667 scope.go:117] "RemoveContainer" containerID="7b687bdeec0b8354da9e648ddc07789b2d329a717764df615911c6a3b3a6768e" Jan 31 03:58:46 crc kubenswrapper[4667]: I0131 03:58:46.979355 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerStarted","Data":"1e53b0068c5af26480719e1ae76b8eb2cdae9fcbfa4d0840e77aebecf0501325"} Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.188415 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-fhjxn"] Jan 31 03:59:11 crc kubenswrapper[4667]: E0131 03:59:11.189256 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9ccc10-6c02-463f-b2fd-a89fcacdb598" containerName="registry" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.189273 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9ccc10-6c02-463f-b2fd-a89fcacdb598" containerName="registry" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.189388 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c9ccc10-6c02-463f-b2fd-a89fcacdb598" containerName="registry" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.189809 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-fhjxn" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.196187 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-z8c84"] Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.196978 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-z8c84" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.202681 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.204021 4667 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-42jjd" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.204411 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.211667 4667 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-2tczz" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.218445 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-2vkbp"] Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.219334 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-2vkbp" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.238433 4667 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-grc2d" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.240967 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-fhjxn"] Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.244398 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-z8c84"] Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.254480 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-2vkbp"] Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.303994 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26gvm\" (UniqueName: \"kubernetes.io/projected/3a2e920c-f3b6-4c7d-aeae-f8d88ce0a3b1-kube-api-access-26gvm\") pod \"cert-manager-webhook-687f57d79b-2vkbp\" (UID: \"3a2e920c-f3b6-4c7d-aeae-f8d88ce0a3b1\") " pod="cert-manager/cert-manager-webhook-687f57d79b-2vkbp" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.304041 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt4qw\" (UniqueName: \"kubernetes.io/projected/06350efe-2c60-4ce9-a58d-034636cc57db-kube-api-access-qt4qw\") pod \"cert-manager-cainjector-cf98fcc89-z8c84\" (UID: \"06350efe-2c60-4ce9-a58d-034636cc57db\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-z8c84" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.304108 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw5hl\" (UniqueName: \"kubernetes.io/projected/79f310bb-9fe3-4e37-9c80-b5c218823271-kube-api-access-kw5hl\") pod \"cert-manager-858654f9db-fhjxn\" (UID: \"79f310bb-9fe3-4e37-9c80-b5c218823271\") " pod="cert-manager/cert-manager-858654f9db-fhjxn" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.405787 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw5hl\" (UniqueName: \"kubernetes.io/projected/79f310bb-9fe3-4e37-9c80-b5c218823271-kube-api-access-kw5hl\") pod \"cert-manager-858654f9db-fhjxn\" (UID: \"79f310bb-9fe3-4e37-9c80-b5c218823271\") " 
pod="cert-manager/cert-manager-858654f9db-fhjxn" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.405934 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26gvm\" (UniqueName: \"kubernetes.io/projected/3a2e920c-f3b6-4c7d-aeae-f8d88ce0a3b1-kube-api-access-26gvm\") pod \"cert-manager-webhook-687f57d79b-2vkbp\" (UID: \"3a2e920c-f3b6-4c7d-aeae-f8d88ce0a3b1\") " pod="cert-manager/cert-manager-webhook-687f57d79b-2vkbp" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.406158 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt4qw\" (UniqueName: \"kubernetes.io/projected/06350efe-2c60-4ce9-a58d-034636cc57db-kube-api-access-qt4qw\") pod \"cert-manager-cainjector-cf98fcc89-z8c84\" (UID: \"06350efe-2c60-4ce9-a58d-034636cc57db\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-z8c84" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.431013 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26gvm\" (UniqueName: \"kubernetes.io/projected/3a2e920c-f3b6-4c7d-aeae-f8d88ce0a3b1-kube-api-access-26gvm\") pod \"cert-manager-webhook-687f57d79b-2vkbp\" (UID: \"3a2e920c-f3b6-4c7d-aeae-f8d88ce0a3b1\") " pod="cert-manager/cert-manager-webhook-687f57d79b-2vkbp" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.431251 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt4qw\" (UniqueName: \"kubernetes.io/projected/06350efe-2c60-4ce9-a58d-034636cc57db-kube-api-access-qt4qw\") pod \"cert-manager-cainjector-cf98fcc89-z8c84\" (UID: \"06350efe-2c60-4ce9-a58d-034636cc57db\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-z8c84" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.433102 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw5hl\" (UniqueName: \"kubernetes.io/projected/79f310bb-9fe3-4e37-9c80-b5c218823271-kube-api-access-kw5hl\") pod \"cert-manager-858654f9db-fhjxn\" (UID: \"79f310bb-9fe3-4e37-9c80-b5c218823271\") " pod="cert-manager/cert-manager-858654f9db-fhjxn" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.512169 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-fhjxn" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.520686 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-z8c84" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.533436 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-2vkbp" Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.781615 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-fhjxn"] Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.797927 4667 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 03:59:11 crc kubenswrapper[4667]: I0131 03:59:11.851517 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-z8c84"] Jan 31 03:59:11 crc kubenswrapper[4667]: W0131 03:59:11.858934 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06350efe_2c60_4ce9_a58d_034636cc57db.slice/crio-e148db13004c98d3c93e92cf02984e810bf7b8128473e12151bbedd81e712d25 WatchSource:0}: Error finding container e148db13004c98d3c93e92cf02984e810bf7b8128473e12151bbedd81e712d25: Status 404 returned error can't find the container with id e148db13004c98d3c93e92cf02984e810bf7b8128473e12151bbedd81e712d25 Jan 31 03:59:12 crc kubenswrapper[4667]: I0131 03:59:12.060800 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-2vkbp"] Jan 31 03:59:12 crc kubenswrapper[4667]: W0131 03:59:12.064096 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a2e920c_f3b6_4c7d_aeae_f8d88ce0a3b1.slice/crio-3760b92bb07124562b801e03ee34dfa8ba2288a2392788550f2176ad3cc63d81 WatchSource:0}: Error finding container 3760b92bb07124562b801e03ee34dfa8ba2288a2392788550f2176ad3cc63d81: Status 404 returned error can't find the container with id 3760b92bb07124562b801e03ee34dfa8ba2288a2392788550f2176ad3cc63d81 Jan 31 03:59:12 crc kubenswrapper[4667]: I0131 03:59:12.163455 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-fhjxn" event={"ID":"79f310bb-9fe3-4e37-9c80-b5c218823271","Type":"ContainerStarted","Data":"73ae00d404c0edc62c3b7dfa9472239bc6b043eeb3c735b79f2e3bc6dd0501e9"} Jan 31 03:59:12 crc kubenswrapper[4667]: I0131 03:59:12.165175 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-z8c84" event={"ID":"06350efe-2c60-4ce9-a58d-034636cc57db","Type":"ContainerStarted","Data":"e148db13004c98d3c93e92cf02984e810bf7b8128473e12151bbedd81e712d25"} Jan 31 03:59:12 crc kubenswrapper[4667]: I0131 03:59:12.166946 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-2vkbp" event={"ID":"3a2e920c-f3b6-4c7d-aeae-f8d88ce0a3b1","Type":"ContainerStarted","Data":"3760b92bb07124562b801e03ee34dfa8ba2288a2392788550f2176ad3cc63d81"} Jan 31 03:59:16 crc kubenswrapper[4667]: I0131 03:59:16.203233 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-2vkbp" event={"ID":"3a2e920c-f3b6-4c7d-aeae-f8d88ce0a3b1","Type":"ContainerStarted","Data":"87c82bdb564fa0fb598bdd806a6e97628b6fa7115e6e0708b9a351ab6bc9e607"} Jan 31 03:59:16 crc kubenswrapper[4667]: I0131 03:59:16.203969 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-2vkbp" Jan 31 03:59:16 crc kubenswrapper[4667]: I0131 03:59:16.207474 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-fhjxn" 
event={"ID":"79f310bb-9fe3-4e37-9c80-b5c218823271","Type":"ContainerStarted","Data":"1159b75355f0fc893536eb7d68d6b11ae1f2c1eaa6377177c648e4262add3190"} Jan 31 03:59:16 crc kubenswrapper[4667]: I0131 03:59:16.209388 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-z8c84" event={"ID":"06350efe-2c60-4ce9-a58d-034636cc57db","Type":"ContainerStarted","Data":"fd5155cbe558f178c9374fbdf5249cfcb2440b755f124fa5181a37e0550f9062"} Jan 31 03:59:16 crc kubenswrapper[4667]: I0131 03:59:16.242065 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-fhjxn" podStartSLOduration=1.149534581 podStartE2EDuration="5.24204551s" podCreationTimestamp="2026-01-31 03:59:11 +0000 UTC" firstStartedPulling="2026-01-31 03:59:11.795137452 +0000 UTC m=+675.311472751" lastFinishedPulling="2026-01-31 03:59:15.887648381 +0000 UTC m=+679.403983680" observedRunningTime="2026-01-31 03:59:16.241260609 +0000 UTC m=+679.757595908" watchObservedRunningTime="2026-01-31 03:59:16.24204551 +0000 UTC m=+679.758380809" Jan 31 03:59:16 crc kubenswrapper[4667]: I0131 03:59:16.243078 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-2vkbp" podStartSLOduration=1.492368097 podStartE2EDuration="5.243073477s" podCreationTimestamp="2026-01-31 03:59:11 +0000 UTC" firstStartedPulling="2026-01-31 03:59:12.066901717 +0000 UTC m=+675.583237026" lastFinishedPulling="2026-01-31 03:59:15.817607117 +0000 UTC m=+679.333942406" observedRunningTime="2026-01-31 03:59:16.225152418 +0000 UTC m=+679.741487737" watchObservedRunningTime="2026-01-31 03:59:16.243073477 +0000 UTC m=+679.759408776" Jan 31 03:59:16 crc kubenswrapper[4667]: I0131 03:59:16.271225 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-z8c84" podStartSLOduration=1.324017119 podStartE2EDuration="5.271206753s" podCreationTimestamp="2026-01-31 03:59:11 +0000 UTC" firstStartedPulling="2026-01-31 03:59:11.862296951 +0000 UTC m=+675.378632250" lastFinishedPulling="2026-01-31 03:59:15.809486575 +0000 UTC m=+679.325821884" observedRunningTime="2026-01-31 03:59:16.269566471 +0000 UTC m=+679.785901780" watchObservedRunningTime="2026-01-31 03:59:16.271206753 +0000 UTC m=+679.787542052" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.001537 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jhj5n"] Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.003057 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="northd" containerID="cri-o://0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e" gracePeriod=30 Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.003360 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91" gracePeriod=30 Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.003421 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="sbdb" 
containerID="cri-o://c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d" gracePeriod=30 Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.003488 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="nbdb" containerID="cri-o://0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce" gracePeriod=30 Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.003529 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovn-controller" containerID="cri-o://332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc" gracePeriod=30 Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.003514 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovn-acl-logging" containerID="cri-o://e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3" gracePeriod=30 Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.003637 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="kube-rbac-proxy-node" containerID="cri-o://70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9" gracePeriod=30 Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.052142 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovnkube-controller" containerID="cri-o://506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b" gracePeriod=30 Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.238416 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cd764_b069c8d1-f785-4509-8ee6-7d44525bdc89/kube-multus/2.log" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.239308 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cd764_b069c8d1-f785-4509-8ee6-7d44525bdc89/kube-multus/1.log" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.239339 4667 generic.go:334] "Generic (PLEG): container finished" podID="b069c8d1-f785-4509-8ee6-7d44525bdc89" containerID="9984a610f48d7ddbc022492b34bc1a1bd85aab975477a59f5f05018d5841f13a" exitCode=2 Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.239392 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cd764" event={"ID":"b069c8d1-f785-4509-8ee6-7d44525bdc89","Type":"ContainerDied","Data":"9984a610f48d7ddbc022492b34bc1a1bd85aab975477a59f5f05018d5841f13a"} Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.239434 4667 scope.go:117] "RemoveContainer" containerID="370b5296f121631f739cdba4f61f648a9f00aec73518549365ffd970bea8db8d" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.239963 4667 scope.go:117] "RemoveContainer" containerID="9984a610f48d7ddbc022492b34bc1a1bd85aab975477a59f5f05018d5841f13a" Jan 31 03:59:21 crc kubenswrapper[4667]: E0131 03:59:21.240219 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus 
pod=multus-cd764_openshift-multus(b069c8d1-f785-4509-8ee6-7d44525bdc89)\"" pod="openshift-multus/multus-cd764" podUID="b069c8d1-f785-4509-8ee6-7d44525bdc89" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.247514 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovnkube-controller/3.log" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.249462 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovn-acl-logging/0.log" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.249884 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovn-controller/0.log" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.250204 4667 generic.go:334] "Generic (PLEG): container finished" podID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerID="0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91" exitCode=0 Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.250226 4667 generic.go:334] "Generic (PLEG): container finished" podID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerID="70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9" exitCode=0 Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.250236 4667 generic.go:334] "Generic (PLEG): container finished" podID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerID="e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3" exitCode=143 Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.250247 4667 generic.go:334] "Generic (PLEG): container finished" podID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerID="332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc" exitCode=143 Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.250270 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerDied","Data":"0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91"} Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.250297 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerDied","Data":"70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9"} Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.250307 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerDied","Data":"e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3"} Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.250315 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerDied","Data":"332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc"} Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.536795 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-2vkbp" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.817368 4667 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovnkube-controller/3.log" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.820074 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovn-acl-logging/0.log" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.820658 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovn-controller/0.log" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.821301 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.874203 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-ovnkube-config\") pod \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.874263 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-env-overrides\") pod \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.874310 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-kubelet\") pod \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.874343 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.874372 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-ovn-node-metrics-cert\") pod \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.874407 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-etc-openvswitch\") pod \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.874440 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-cni-bin\") pod \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.874488 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-cni-netd\") pod \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.874532 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-run-ovn\") pod \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.874553 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-run-netns\") pod \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.874575 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-run-ovn-kubernetes\") pod \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.874598 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-node-log\") pod \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.874645 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-log-socket\") pod \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.874667 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-run-openvswitch\") pod \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.874696 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-var-lib-openvswitch\") pod \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.874721 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmls5\" (UniqueName: \"kubernetes.io/projected/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-kube-api-access-dmls5\") pod \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.874751 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-ovnkube-script-lib\") pod \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.874776 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-slash\") pod \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.874797 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-run-systemd\") pod \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.874822 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-systemd-units\") pod \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\" (UID: \"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b\") " Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.875142 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" (UID: "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.875638 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" (UID: "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.875929 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" (UID: "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.875973 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" (UID: "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.876011 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" (UID: "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.889078 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" (UID: "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.889180 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" (UID: "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.889208 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" (UID: "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.889233 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" (UID: "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.889254 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" (UID: "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.889277 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" (UID: "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.889301 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-node-log" (OuterVolumeSpecName: "node-log") pod "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" (UID: "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.889329 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-log-socket" (OuterVolumeSpecName: "log-socket") pod "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" (UID: "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.889354 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" (UID: "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.889373 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" (UID: "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894319 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qxqmx"] Jan 31 03:59:21 crc kubenswrapper[4667]: E0131 03:59:21.894544 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovn-acl-logging" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894565 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovn-acl-logging" Jan 31 03:59:21 crc kubenswrapper[4667]: E0131 03:59:21.894577 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="kubecfg-setup" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894585 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="kubecfg-setup" Jan 31 03:59:21 crc kubenswrapper[4667]: E0131 03:59:21.894593 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovnkube-controller" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894600 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovnkube-controller" Jan 31 03:59:21 crc kubenswrapper[4667]: E0131 03:59:21.894613 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="nbdb" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894620 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="nbdb" Jan 31 03:59:21 crc kubenswrapper[4667]: E0131 03:59:21.894630 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="sbdb" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894636 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="sbdb" Jan 31 03:59:21 crc kubenswrapper[4667]: E0131 03:59:21.894648 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovnkube-controller" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894653 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovnkube-controller" Jan 31 03:59:21 crc kubenswrapper[4667]: E0131 03:59:21.894661 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovn-controller" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894667 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovn-controller" Jan 31 03:59:21 crc kubenswrapper[4667]: E0131 03:59:21.894676 4667 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894682 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 03:59:21 crc kubenswrapper[4667]: E0131 03:59:21.894688 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="northd" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894694 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="northd" Jan 31 03:59:21 crc kubenswrapper[4667]: E0131 03:59:21.894700 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="kube-rbac-proxy-node" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894707 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="kube-rbac-proxy-node" Jan 31 03:59:21 crc kubenswrapper[4667]: E0131 03:59:21.894715 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovnkube-controller" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894720 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovnkube-controller" Jan 31 03:59:21 crc kubenswrapper[4667]: E0131 03:59:21.894729 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovnkube-controller" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894735 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovnkube-controller" Jan 31 03:59:21 crc kubenswrapper[4667]: E0131 03:59:21.894742 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovnkube-controller" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894748 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovnkube-controller" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894860 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovnkube-controller" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894869 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovn-controller" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894878 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="kube-rbac-proxy-node" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894884 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovnkube-controller" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894892 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="nbdb" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894903 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 03:59:21 crc 
kubenswrapper[4667]: I0131 03:59:21.894915 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovnkube-controller" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894924 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovnkube-controller" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894934 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="sbdb" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894944 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovn-acl-logging" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.894951 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="northd" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.895131 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerName="ovnkube-controller" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.927041 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-slash" (OuterVolumeSpecName: "host-slash") pod "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" (UID: "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.927992 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" (UID: "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.929057 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-kube-api-access-dmls5" (OuterVolumeSpecName: "kube-api-access-dmls5") pod "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" (UID: "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b"). InnerVolumeSpecName "kube-api-access-dmls5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.939045 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" (UID: "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.944032 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.951371 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" (UID: "3d685ba5-5ff5-4e74-8d02-99a233fc6c9b"). 
InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.976608 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-ovnkube-script-lib\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.976664 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-kubelet\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.976691 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-run-ovn-kubernetes\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.976964 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-ovn-node-metrics-cert\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977129 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-node-log\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977181 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-run-netns\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977202 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-slash\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977311 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-cni-bin\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977348 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-etc-openvswitch\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977371 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-env-overrides\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977396 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-run-ovn\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977424 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-systemd-units\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977465 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twl6j\" (UniqueName: \"kubernetes.io/projected/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-kube-api-access-twl6j\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977515 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-run-systemd\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977556 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977582 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-cni-netd\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977610 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-var-lib-openvswitch\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977721 4667 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-run-openvswitch\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977762 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-ovnkube-config\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977822 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-log-socket\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977920 4667 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977935 4667 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977947 4667 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977958 4667 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977968 4667 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-node-log\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977976 4667 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-log-socket\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977985 4667 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.977994 4667 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.978002 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmls5\" (UniqueName: \"kubernetes.io/projected/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-kube-api-access-dmls5\") on node \"crc\" DevicePath \"\"" 
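Editor's note: the entries above show the volume reconciler tearing down every volume of the deleted ovnkube-node-jhj5n pod (UID 3d685ba5-…): each "operationExecutor.UnmountVolume started" line is later matched by an "UnmountVolume.TearDown succeeded" line and finally a "Volume detached" line. Below is a minimal sketch, not kubelet code, for auditing such a journal: it pairs the started/succeeded entries per volume for one pod UID so any teardown that never completed stands out. It assumes journalctl-style input with one entry per line, and the regexes are keyed to the quoted klog format above; both are assumptions about your log source.

```go
// unmountaudit.go — a sketch, assuming one kubelet journal entry per line
// on stdin; pairs "UnmountVolume started" with "TearDown succeeded" per
// volume for a given pod UID and reports any volume left outstanding.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	// "operationExecutor.UnmountVolume started for volume \"NAME\" ... pod \"UID\""
	started = regexp.MustCompile(`UnmountVolume started for volume \\"([^"\\]+)\\".*pod \\"([0-9a-f-]+)\\"`)
	// `UnmountVolume.TearDown succeeded for volume "..." (OuterVolumeSpecName: "NAME") pod "UID"`
	done = regexp.MustCompile(`UnmountVolume\.TearDown succeeded for volume .*OuterVolumeSpecName: "([^"]+)"\) pod "([0-9a-f-]+)"`)
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: unmountaudit <pod-uid> < kubelet-journal.log")
		os.Exit(1)
	}
	uid := os.Args[1]
	pending := map[string]bool{} // volume name -> teardown still outstanding
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		if m := started.FindStringSubmatch(line); m != nil && m[2] == uid {
			pending[m[1]] = true
		}
		if m := done.FindStringSubmatch(line); m != nil && m[2] == uid {
			delete(pending, m[1])
		}
	}
	if len(pending) == 0 {
		fmt.Println("all observed unmounts reached TearDown succeeded")
		return
	}
	for vol := range pending {
		fmt.Printf("volume %q: unmount started but no TearDown succeeded seen\n", vol)
	}
}
```

A possible invocation against this node, using the old pod's UID from the entries above: `journalctl -u kubelet --no-pager | go run unmountaudit.go 3d685ba5-5ff5-4e74-8d02-99a233fc6c9b`. In the excerpt here, every teardown completes, which is why the "Volume detached" confirmations follow.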
Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.978011 4667 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.978020 4667 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-slash\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.978028 4667 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.978037 4667 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.978046 4667 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.978054 4667 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.978063 4667 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.978072 4667 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.978082 4667 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.978091 4667 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:21 crc kubenswrapper[4667]: I0131 03:59:21.978099 4667 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.078925 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-cni-bin\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.078964 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-etc-openvswitch\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.078983 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-env-overrides\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079000 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-run-ovn\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079019 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-systemd-units\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079045 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twl6j\" (UniqueName: \"kubernetes.io/projected/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-kube-api-access-twl6j\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079060 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-run-systemd\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079083 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079100 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-cni-netd\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079136 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-var-lib-openvswitch\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079124 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-etc-openvswitch\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079173 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-run-openvswitch\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079215 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079248 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-cni-netd\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079282 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-var-lib-openvswitch\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079152 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-cni-bin\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079280 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-systemd-units\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079341 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-run-ovn\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079227 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-run-openvswitch\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079324 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-run-systemd\") pod \"ovnkube-node-qxqmx\" (UID: 
\"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079380 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-ovnkube-config\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079497 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-log-socket\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079515 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-ovnkube-script-lib\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079531 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-run-ovn-kubernetes\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079546 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-kubelet\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079564 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-ovn-node-metrics-cert\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079581 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-node-log\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079602 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-run-netns\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079621 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-slash\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" 
Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079664 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-slash\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079687 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-log-socket\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079710 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-kubelet\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.079962 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-env-overrides\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.080046 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-node-log\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.080082 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-run-netns\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.080117 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-host-run-ovn-kubernetes\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.080592 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-ovnkube-script-lib\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.080664 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-ovnkube-config\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.082899 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-ovn-node-metrics-cert\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.107635 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twl6j\" (UniqueName: \"kubernetes.io/projected/494cc7d5-48fc-4ffc-acf5-fb69642af4ca-kube-api-access-twl6j\") pod \"ovnkube-node-qxqmx\" (UID: \"494cc7d5-48fc-4ffc-acf5-fb69642af4ca\") " pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.258642 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovnkube-controller/3.log" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.261406 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovn-acl-logging/0.log" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.262124 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jhj5n_3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/ovn-controller/0.log" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.262542 4667 generic.go:334] "Generic (PLEG): container finished" podID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerID="506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b" exitCode=0 Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.262570 4667 generic.go:334] "Generic (PLEG): container finished" podID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerID="c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d" exitCode=0 Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.262579 4667 generic.go:334] "Generic (PLEG): container finished" podID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerID="0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce" exitCode=0 Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.262587 4667 generic.go:334] "Generic (PLEG): container finished" podID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" containerID="0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e" exitCode=0 Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.262640 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerDied","Data":"506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b"} Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.262673 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerDied","Data":"c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d"} Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.262686 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerDied","Data":"0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce"} Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.262696 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" 
event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerDied","Data":"0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e"} Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.262709 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" event={"ID":"3d685ba5-5ff5-4e74-8d02-99a233fc6c9b","Type":"ContainerDied","Data":"61ef93122274eda5482aad4c31f67d4e8d20c68d3174cfcb2086cd35574e727e"} Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.262730 4667 scope.go:117] "RemoveContainer" containerID="506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.262727 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jhj5n" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.265347 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cd764_b069c8d1-f785-4509-8ee6-7d44525bdc89/kube-multus/2.log" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.276099 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.284130 4667 scope.go:117] "RemoveContainer" containerID="f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.296519 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jhj5n"] Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.309300 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jhj5n"] Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.317398 4667 scope.go:117] "RemoveContainer" containerID="c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.377937 4667 scope.go:117] "RemoveContainer" containerID="0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.406835 4667 scope.go:117] "RemoveContainer" containerID="0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.441095 4667 scope.go:117] "RemoveContainer" containerID="0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.458254 4667 scope.go:117] "RemoveContainer" containerID="70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.474051 4667 scope.go:117] "RemoveContainer" containerID="e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.491945 4667 scope.go:117] "RemoveContainer" containerID="332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.514075 4667 scope.go:117] "RemoveContainer" containerID="3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.563821 4667 scope.go:117] "RemoveContainer" containerID="506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b" Jan 31 03:59:22 crc kubenswrapper[4667]: E0131 03:59:22.564384 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b\": container with ID starting with 506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b not found: ID does not exist" containerID="506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.564446 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b"} err="failed to get container status \"506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b\": rpc error: code = NotFound desc = could not find container \"506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b\": container with ID starting with 506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.564490 4667 scope.go:117] "RemoveContainer" containerID="f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2" Jan 31 03:59:22 crc kubenswrapper[4667]: E0131 03:59:22.564917 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2\": container with ID starting with f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2 not found: ID does not exist" containerID="f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.564940 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2"} err="failed to get container status \"f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2\": rpc error: code = NotFound desc = could not find container \"f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2\": container with ID starting with f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2 not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.564959 4667 scope.go:117] "RemoveContainer" containerID="c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d" Jan 31 03:59:22 crc kubenswrapper[4667]: E0131 03:59:22.565654 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\": container with ID starting with c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d not found: ID does not exist" containerID="c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.565787 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d"} err="failed to get container status \"c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\": rpc error: code = NotFound desc = could not find container \"c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\": container with ID starting with c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.565985 4667 scope.go:117] "RemoveContainer" containerID="0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce" Jan 31 03:59:22 crc 
kubenswrapper[4667]: E0131 03:59:22.566454 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\": container with ID starting with 0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce not found: ID does not exist" containerID="0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.566482 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce"} err="failed to get container status \"0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\": rpc error: code = NotFound desc = could not find container \"0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\": container with ID starting with 0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.566499 4667 scope.go:117] "RemoveContainer" containerID="0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e" Jan 31 03:59:22 crc kubenswrapper[4667]: E0131 03:59:22.567224 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\": container with ID starting with 0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e not found: ID does not exist" containerID="0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.567351 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e"} err="failed to get container status \"0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\": rpc error: code = NotFound desc = could not find container \"0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\": container with ID starting with 0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.567450 4667 scope.go:117] "RemoveContainer" containerID="0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91" Jan 31 03:59:22 crc kubenswrapper[4667]: E0131 03:59:22.567871 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\": container with ID starting with 0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91 not found: ID does not exist" containerID="0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.567919 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91"} err="failed to get container status \"0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\": rpc error: code = NotFound desc = could not find container \"0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\": container with ID starting with 0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91 not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: 
I0131 03:59:22.567958 4667 scope.go:117] "RemoveContainer" containerID="70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9" Jan 31 03:59:22 crc kubenswrapper[4667]: E0131 03:59:22.568808 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\": container with ID starting with 70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9 not found: ID does not exist" containerID="70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.568977 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9"} err="failed to get container status \"70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\": rpc error: code = NotFound desc = could not find container \"70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\": container with ID starting with 70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9 not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.569202 4667 scope.go:117] "RemoveContainer" containerID="e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3" Jan 31 03:59:22 crc kubenswrapper[4667]: E0131 03:59:22.570258 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\": container with ID starting with e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3 not found: ID does not exist" containerID="e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.570298 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3"} err="failed to get container status \"e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\": rpc error: code = NotFound desc = could not find container \"e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\": container with ID starting with e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3 not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.570351 4667 scope.go:117] "RemoveContainer" containerID="332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc" Jan 31 03:59:22 crc kubenswrapper[4667]: E0131 03:59:22.571042 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\": container with ID starting with 332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc not found: ID does not exist" containerID="332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.571194 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc"} err="failed to get container status \"332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\": rpc error: code = NotFound desc = could not find container \"332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\": container 
with ID starting with 332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.571283 4667 scope.go:117] "RemoveContainer" containerID="3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9" Jan 31 03:59:22 crc kubenswrapper[4667]: E0131 03:59:22.571786 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\": container with ID starting with 3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9 not found: ID does not exist" containerID="3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.571865 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9"} err="failed to get container status \"3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\": rpc error: code = NotFound desc = could not find container \"3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\": container with ID starting with 3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9 not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.571899 4667 scope.go:117] "RemoveContainer" containerID="506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.572425 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b"} err="failed to get container status \"506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b\": rpc error: code = NotFound desc = could not find container \"506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b\": container with ID starting with 506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.572448 4667 scope.go:117] "RemoveContainer" containerID="f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.572770 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2"} err="failed to get container status \"f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2\": rpc error: code = NotFound desc = could not find container \"f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2\": container with ID starting with f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2 not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.572801 4667 scope.go:117] "RemoveContainer" containerID="c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.573481 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d"} err="failed to get container status \"c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\": rpc error: code = NotFound desc = could not find container \"c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\": container 
with ID starting with c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.573703 4667 scope.go:117] "RemoveContainer" containerID="0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.574206 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce"} err="failed to get container status \"0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\": rpc error: code = NotFound desc = could not find container \"0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\": container with ID starting with 0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.574246 4667 scope.go:117] "RemoveContainer" containerID="0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.574641 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e"} err="failed to get container status \"0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\": rpc error: code = NotFound desc = could not find container \"0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\": container with ID starting with 0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.574751 4667 scope.go:117] "RemoveContainer" containerID="0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.575260 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91"} err="failed to get container status \"0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\": rpc error: code = NotFound desc = could not find container \"0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\": container with ID starting with 0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91 not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.575288 4667 scope.go:117] "RemoveContainer" containerID="70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.575977 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9"} err="failed to get container status \"70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\": rpc error: code = NotFound desc = could not find container \"70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\": container with ID starting with 70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9 not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.576010 4667 scope.go:117] "RemoveContainer" containerID="e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.576508 4667 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3"} err="failed to get container status \"e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\": rpc error: code = NotFound desc = could not find container \"e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\": container with ID starting with e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3 not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.576535 4667 scope.go:117] "RemoveContainer" containerID="332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.577025 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc"} err="failed to get container status \"332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\": rpc error: code = NotFound desc = could not find container \"332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\": container with ID starting with 332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.577133 4667 scope.go:117] "RemoveContainer" containerID="3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.577504 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9"} err="failed to get container status \"3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\": rpc error: code = NotFound desc = could not find container \"3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\": container with ID starting with 3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9 not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.577535 4667 scope.go:117] "RemoveContainer" containerID="506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.577983 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b"} err="failed to get container status \"506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b\": rpc error: code = NotFound desc = could not find container \"506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b\": container with ID starting with 506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.578009 4667 scope.go:117] "RemoveContainer" containerID="f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.578364 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2"} err="failed to get container status \"f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2\": rpc error: code = NotFound desc = could not find container \"f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2\": container with ID starting with f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2 not found: ID does not exist" Jan 
31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.578461 4667 scope.go:117] "RemoveContainer" containerID="c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.579148 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d"} err="failed to get container status \"c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\": rpc error: code = NotFound desc = could not find container \"c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\": container with ID starting with c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.579171 4667 scope.go:117] "RemoveContainer" containerID="0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.579462 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce"} err="failed to get container status \"0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\": rpc error: code = NotFound desc = could not find container \"0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\": container with ID starting with 0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.579609 4667 scope.go:117] "RemoveContainer" containerID="0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.580085 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e"} err="failed to get container status \"0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\": rpc error: code = NotFound desc = could not find container \"0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\": container with ID starting with 0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.580189 4667 scope.go:117] "RemoveContainer" containerID="0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.580551 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91"} err="failed to get container status \"0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\": rpc error: code = NotFound desc = could not find container \"0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\": container with ID starting with 0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91 not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.580575 4667 scope.go:117] "RemoveContainer" containerID="70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.581101 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9"} err="failed to get container status 
\"70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\": rpc error: code = NotFound desc = could not find container \"70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\": container with ID starting with 70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9 not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.581243 4667 scope.go:117] "RemoveContainer" containerID="e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.581858 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3"} err="failed to get container status \"e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\": rpc error: code = NotFound desc = could not find container \"e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\": container with ID starting with e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3 not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.581959 4667 scope.go:117] "RemoveContainer" containerID="332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.582513 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc"} err="failed to get container status \"332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\": rpc error: code = NotFound desc = could not find container \"332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\": container with ID starting with 332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.582538 4667 scope.go:117] "RemoveContainer" containerID="3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.582991 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9"} err="failed to get container status \"3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\": rpc error: code = NotFound desc = could not find container \"3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\": container with ID starting with 3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9 not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.583153 4667 scope.go:117] "RemoveContainer" containerID="506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.583665 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b"} err="failed to get container status \"506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b\": rpc error: code = NotFound desc = could not find container \"506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b\": container with ID starting with 506293b7d928fe97c6b77d9109ec52621924d8d435d257363cf0fbd2e4b95a1b not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.583728 4667 scope.go:117] "RemoveContainer" 
containerID="f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.584130 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2"} err="failed to get container status \"f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2\": rpc error: code = NotFound desc = could not find container \"f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2\": container with ID starting with f8ea9d94faf102adf3e8e0c6c13fc20da919f3b287704731c53453ac9fa045f2 not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.584296 4667 scope.go:117] "RemoveContainer" containerID="c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.584884 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d"} err="failed to get container status \"c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\": rpc error: code = NotFound desc = could not find container \"c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d\": container with ID starting with c19a62fc19c6397794ed791657ceb65beaa946c6107106e9b49d10bddc85356d not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.584910 4667 scope.go:117] "RemoveContainer" containerID="0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.585503 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce"} err="failed to get container status \"0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\": rpc error: code = NotFound desc = could not find container \"0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce\": container with ID starting with 0a1af13fdf1dbe49dc0981f9cdbd6402104102c3d936f0dbf877c75f706db0ce not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.585588 4667 scope.go:117] "RemoveContainer" containerID="0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.586183 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e"} err="failed to get container status \"0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\": rpc error: code = NotFound desc = could not find container \"0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e\": container with ID starting with 0ee6f61d6fa19e4c15027fd126e5f74fed2b64ff45fd5381bf69980a2564d95e not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.586271 4667 scope.go:117] "RemoveContainer" containerID="0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.586693 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91"} err="failed to get container status \"0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\": rpc error: code = NotFound desc = could not find 
container \"0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91\": container with ID starting with 0751def9e846d03ea2f4c54c7b5c83ac94d553ac6f874ea8a5a4b714fd43ae91 not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.586757 4667 scope.go:117] "RemoveContainer" containerID="70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.587241 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9"} err="failed to get container status \"70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\": rpc error: code = NotFound desc = could not find container \"70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9\": container with ID starting with 70c088d111202a365c7f09bf143166a7325a8c7a60e158ff94d0b08b432f87d9 not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.587340 4667 scope.go:117] "RemoveContainer" containerID="e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.587883 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3"} err="failed to get container status \"e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\": rpc error: code = NotFound desc = could not find container \"e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3\": container with ID starting with e4a347fa94949443ee97fa335dfeb43ab031aeb511e27a19e76e9082ed2d0ec3 not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.587959 4667 scope.go:117] "RemoveContainer" containerID="332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.588283 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc"} err="failed to get container status \"332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\": rpc error: code = NotFound desc = could not find container \"332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc\": container with ID starting with 332ee4ddad35175a2ce12c037ab5906b5f99616dcbb90a9f04c0239644bc94bc not found: ID does not exist" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.588321 4667 scope.go:117] "RemoveContainer" containerID="3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9" Jan 31 03:59:22 crc kubenswrapper[4667]: I0131 03:59:22.588758 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9"} err="failed to get container status \"3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\": rpc error: code = NotFound desc = could not find container \"3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9\": container with ID starting with 3856ec9bc77627ac1e9ffbbc8b123cd720b8e7b7e0b559a6a9c75d2edde5fda9 not found: ID does not exist" Jan 31 03:59:23 crc kubenswrapper[4667]: I0131 03:59:23.276257 4667 generic.go:334] "Generic (PLEG): container finished" podID="494cc7d5-48fc-4ffc-acf5-fb69642af4ca" containerID="226085eaded1ba6d3eab68d2c7cd9ab0c88a2c006aaab3e1080859210ff2b91d" exitCode=0 Jan 
31 03:59:23 crc kubenswrapper[4667]: I0131 03:59:23.276318 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" event={"ID":"494cc7d5-48fc-4ffc-acf5-fb69642af4ca","Type":"ContainerDied","Data":"226085eaded1ba6d3eab68d2c7cd9ab0c88a2c006aaab3e1080859210ff2b91d"} Jan 31 03:59:23 crc kubenswrapper[4667]: I0131 03:59:23.276354 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" event={"ID":"494cc7d5-48fc-4ffc-acf5-fb69642af4ca","Type":"ContainerStarted","Data":"8ed42c9f41f1e0adbc7fc08d0af57b909f105515ebcf387cac965c1de5611a5a"} Jan 31 03:59:23 crc kubenswrapper[4667]: I0131 03:59:23.290464 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d685ba5-5ff5-4e74-8d02-99a233fc6c9b" path="/var/lib/kubelet/pods/3d685ba5-5ff5-4e74-8d02-99a233fc6c9b/volumes" Jan 31 03:59:24 crc kubenswrapper[4667]: I0131 03:59:24.288461 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" event={"ID":"494cc7d5-48fc-4ffc-acf5-fb69642af4ca","Type":"ContainerStarted","Data":"aea324c77bc0dbce97bdb27b0f03ee10b6e6bb60e35d15b33c62e5cb791e8160"} Jan 31 03:59:24 crc kubenswrapper[4667]: I0131 03:59:24.289010 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" event={"ID":"494cc7d5-48fc-4ffc-acf5-fb69642af4ca","Type":"ContainerStarted","Data":"ab3e92464b7e80470cdf8fd34105d5d4171cee6c418a1c124635590d5b423cd6"} Jan 31 03:59:24 crc kubenswrapper[4667]: I0131 03:59:24.289035 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" event={"ID":"494cc7d5-48fc-4ffc-acf5-fb69642af4ca","Type":"ContainerStarted","Data":"8fb274383b733212dd6ca1fff99dd9a276fd45695be65d1a9558ecacfcc12a1c"} Jan 31 03:59:24 crc kubenswrapper[4667]: I0131 03:59:24.289054 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" event={"ID":"494cc7d5-48fc-4ffc-acf5-fb69642af4ca","Type":"ContainerStarted","Data":"d450041a4f0c3099083cba205042026c3f3f1c2fb5f19feff5142f0435252bee"} Jan 31 03:59:24 crc kubenswrapper[4667]: I0131 03:59:24.289073 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" event={"ID":"494cc7d5-48fc-4ffc-acf5-fb69642af4ca","Type":"ContainerStarted","Data":"a0b627a11179d1ed75ddbcf822b418c383a1da8f2c6fade044e5c11f3ec5cf02"} Jan 31 03:59:24 crc kubenswrapper[4667]: I0131 03:59:24.289089 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" event={"ID":"494cc7d5-48fc-4ffc-acf5-fb69642af4ca","Type":"ContainerStarted","Data":"b9ecdf58f4399c41558b5d85590729ed04e8ad2b4ed891445a8f05b534c70cf6"} Jan 31 03:59:26 crc kubenswrapper[4667]: I0131 03:59:26.306582 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" event={"ID":"494cc7d5-48fc-4ffc-acf5-fb69642af4ca","Type":"ContainerStarted","Data":"24697d2d092981c9fe139a255a3997c9637f751a793fcdd44b09a89d73cfde14"} Jan 31 03:59:29 crc kubenswrapper[4667]: I0131 03:59:29.328894 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" event={"ID":"494cc7d5-48fc-4ffc-acf5-fb69642af4ca","Type":"ContainerStarted","Data":"afb7b4a037cf943e2acb00495de8ffd7d1699f7beee1565756b15c1aafe753b2"} Jan 31 03:59:29 crc kubenswrapper[4667]: I0131 03:59:29.329707 4667 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:29 crc kubenswrapper[4667]: I0131 03:59:29.329829 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:29 crc kubenswrapper[4667]: I0131 03:59:29.329917 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:29 crc kubenswrapper[4667]: I0131 03:59:29.362653 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:29 crc kubenswrapper[4667]: I0131 03:59:29.367279 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:29 crc kubenswrapper[4667]: I0131 03:59:29.402278 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" podStartSLOduration=8.402256316 podStartE2EDuration="8.402256316s" podCreationTimestamp="2026-01-31 03:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 03:59:29.367540337 +0000 UTC m=+692.883875656" watchObservedRunningTime="2026-01-31 03:59:29.402256316 +0000 UTC m=+692.918591605" Jan 31 03:59:32 crc kubenswrapper[4667]: I0131 03:59:32.282289 4667 scope.go:117] "RemoveContainer" containerID="9984a610f48d7ddbc022492b34bc1a1bd85aab975477a59f5f05018d5841f13a" Jan 31 03:59:32 crc kubenswrapper[4667]: E0131 03:59:32.283869 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-cd764_openshift-multus(b069c8d1-f785-4509-8ee6-7d44525bdc89)\"" pod="openshift-multus/multus-cd764" podUID="b069c8d1-f785-4509-8ee6-7d44525bdc89" Jan 31 03:59:45 crc kubenswrapper[4667]: I0131 03:59:45.283810 4667 scope.go:117] "RemoveContainer" containerID="9984a610f48d7ddbc022492b34bc1a1bd85aab975477a59f5f05018d5841f13a" Jan 31 03:59:46 crc kubenswrapper[4667]: I0131 03:59:46.444762 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cd764_b069c8d1-f785-4509-8ee6-7d44525bdc89/kube-multus/2.log" Jan 31 03:59:46 crc kubenswrapper[4667]: I0131 03:59:46.447178 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cd764" event={"ID":"b069c8d1-f785-4509-8ee6-7d44525bdc89","Type":"ContainerStarted","Data":"5fb35928b8b5bd40faffa538138bfcf205369a6b29837f39258fc7c7b211f934"} Jan 31 03:59:52 crc kubenswrapper[4667]: I0131 03:59:52.318369 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qxqmx" Jan 31 03:59:58 crc kubenswrapper[4667]: I0131 03:59:58.762001 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz"] Jan 31 03:59:58 crc kubenswrapper[4667]: I0131 03:59:58.764385 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz" Jan 31 03:59:58 crc kubenswrapper[4667]: I0131 03:59:58.767005 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 03:59:58 crc kubenswrapper[4667]: I0131 03:59:58.773494 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz"] Jan 31 03:59:58 crc kubenswrapper[4667]: I0131 03:59:58.815634 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12187e5c-4ff4-4ab3-baea-3501646a5c68-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz\" (UID: \"12187e5c-4ff4-4ab3-baea-3501646a5c68\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz" Jan 31 03:59:58 crc kubenswrapper[4667]: I0131 03:59:58.816024 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkbvd\" (UniqueName: \"kubernetes.io/projected/12187e5c-4ff4-4ab3-baea-3501646a5c68-kube-api-access-zkbvd\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz\" (UID: \"12187e5c-4ff4-4ab3-baea-3501646a5c68\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz" Jan 31 03:59:58 crc kubenswrapper[4667]: I0131 03:59:58.816191 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12187e5c-4ff4-4ab3-baea-3501646a5c68-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz\" (UID: \"12187e5c-4ff4-4ab3-baea-3501646a5c68\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz" Jan 31 03:59:58 crc kubenswrapper[4667]: I0131 03:59:58.917361 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12187e5c-4ff4-4ab3-baea-3501646a5c68-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz\" (UID: \"12187e5c-4ff4-4ab3-baea-3501646a5c68\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz" Jan 31 03:59:58 crc kubenswrapper[4667]: I0131 03:59:58.917426 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkbvd\" (UniqueName: \"kubernetes.io/projected/12187e5c-4ff4-4ab3-baea-3501646a5c68-kube-api-access-zkbvd\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz\" (UID: \"12187e5c-4ff4-4ab3-baea-3501646a5c68\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz" Jan 31 03:59:58 crc kubenswrapper[4667]: I0131 03:59:58.917501 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12187e5c-4ff4-4ab3-baea-3501646a5c68-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz\" (UID: \"12187e5c-4ff4-4ab3-baea-3501646a5c68\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz" Jan 31 03:59:58 crc kubenswrapper[4667]: I0131 03:59:58.917998 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/12187e5c-4ff4-4ab3-baea-3501646a5c68-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz\" (UID: \"12187e5c-4ff4-4ab3-baea-3501646a5c68\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz" Jan 31 03:59:58 crc kubenswrapper[4667]: I0131 03:59:58.918111 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12187e5c-4ff4-4ab3-baea-3501646a5c68-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz\" (UID: \"12187e5c-4ff4-4ab3-baea-3501646a5c68\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz" Jan 31 03:59:58 crc kubenswrapper[4667]: I0131 03:59:58.950309 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkbvd\" (UniqueName: \"kubernetes.io/projected/12187e5c-4ff4-4ab3-baea-3501646a5c68-kube-api-access-zkbvd\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz\" (UID: \"12187e5c-4ff4-4ab3-baea-3501646a5c68\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz" Jan 31 03:59:59 crc kubenswrapper[4667]: I0131 03:59:59.077804 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz" Jan 31 03:59:59 crc kubenswrapper[4667]: I0131 03:59:59.338673 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz"] Jan 31 03:59:59 crc kubenswrapper[4667]: I0131 03:59:59.539079 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz" event={"ID":"12187e5c-4ff4-4ab3-baea-3501646a5c68","Type":"ContainerStarted","Data":"e5cd68d728871ba2896d2beb5bc38f094060bd65262e4c215014cd39567a6b03"} Jan 31 03:59:59 crc kubenswrapper[4667]: I0131 03:59:59.539574 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz" event={"ID":"12187e5c-4ff4-4ab3-baea-3501646a5c68","Type":"ContainerStarted","Data":"ea3d3ba91a83fb042b4a5d0c9b89100f27af8fd377b4f5c4e37741ed0c058cfb"} Jan 31 04:00:00 crc kubenswrapper[4667]: I0131 04:00:00.181741 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz"] Jan 31 04:00:00 crc kubenswrapper[4667]: I0131 04:00:00.182745 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz" Jan 31 04:00:00 crc kubenswrapper[4667]: I0131 04:00:00.186345 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 04:00:00 crc kubenswrapper[4667]: I0131 04:00:00.186682 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 04:00:00 crc kubenswrapper[4667]: I0131 04:00:00.196404 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz"] Jan 31 04:00:00 crc kubenswrapper[4667]: I0131 04:00:00.337954 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2j7t\" (UniqueName: \"kubernetes.io/projected/704ddfdd-061e-4dff-a878-3c0755c07a6d-kube-api-access-p2j7t\") pod \"collect-profiles-29497200-jtnvz\" (UID: \"704ddfdd-061e-4dff-a878-3c0755c07a6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz" Jan 31 04:00:00 crc kubenswrapper[4667]: I0131 04:00:00.338031 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/704ddfdd-061e-4dff-a878-3c0755c07a6d-config-volume\") pod \"collect-profiles-29497200-jtnvz\" (UID: \"704ddfdd-061e-4dff-a878-3c0755c07a6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz" Jan 31 04:00:00 crc kubenswrapper[4667]: I0131 04:00:00.338072 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/704ddfdd-061e-4dff-a878-3c0755c07a6d-secret-volume\") pod \"collect-profiles-29497200-jtnvz\" (UID: \"704ddfdd-061e-4dff-a878-3c0755c07a6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz" Jan 31 04:00:00 crc kubenswrapper[4667]: I0131 04:00:00.439784 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/704ddfdd-061e-4dff-a878-3c0755c07a6d-config-volume\") pod \"collect-profiles-29497200-jtnvz\" (UID: \"704ddfdd-061e-4dff-a878-3c0755c07a6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz" Jan 31 04:00:00 crc kubenswrapper[4667]: I0131 04:00:00.441034 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/704ddfdd-061e-4dff-a878-3c0755c07a6d-secret-volume\") pod \"collect-profiles-29497200-jtnvz\" (UID: \"704ddfdd-061e-4dff-a878-3c0755c07a6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz" Jan 31 04:00:00 crc kubenswrapper[4667]: I0131 04:00:00.441407 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/704ddfdd-061e-4dff-a878-3c0755c07a6d-config-volume\") pod \"collect-profiles-29497200-jtnvz\" (UID: \"704ddfdd-061e-4dff-a878-3c0755c07a6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz" Jan 31 04:00:00 crc kubenswrapper[4667]: I0131 04:00:00.441541 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2j7t\" (UniqueName: \"kubernetes.io/projected/704ddfdd-061e-4dff-a878-3c0755c07a6d-kube-api-access-p2j7t\") pod 
\"collect-profiles-29497200-jtnvz\" (UID: \"704ddfdd-061e-4dff-a878-3c0755c07a6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz" Jan 31 04:00:00 crc kubenswrapper[4667]: I0131 04:00:00.451048 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/704ddfdd-061e-4dff-a878-3c0755c07a6d-secret-volume\") pod \"collect-profiles-29497200-jtnvz\" (UID: \"704ddfdd-061e-4dff-a878-3c0755c07a6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz" Jan 31 04:00:00 crc kubenswrapper[4667]: I0131 04:00:00.470184 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2j7t\" (UniqueName: \"kubernetes.io/projected/704ddfdd-061e-4dff-a878-3c0755c07a6d-kube-api-access-p2j7t\") pod \"collect-profiles-29497200-jtnvz\" (UID: \"704ddfdd-061e-4dff-a878-3c0755c07a6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz" Jan 31 04:00:00 crc kubenswrapper[4667]: I0131 04:00:00.510790 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz" Jan 31 04:00:00 crc kubenswrapper[4667]: I0131 04:00:00.551161 4667 generic.go:334] "Generic (PLEG): container finished" podID="12187e5c-4ff4-4ab3-baea-3501646a5c68" containerID="e5cd68d728871ba2896d2beb5bc38f094060bd65262e4c215014cd39567a6b03" exitCode=0 Jan 31 04:00:00 crc kubenswrapper[4667]: I0131 04:00:00.551225 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz" event={"ID":"12187e5c-4ff4-4ab3-baea-3501646a5c68","Type":"ContainerDied","Data":"e5cd68d728871ba2896d2beb5bc38f094060bd65262e4c215014cd39567a6b03"} Jan 31 04:00:00 crc kubenswrapper[4667]: I0131 04:00:00.745729 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz"] Jan 31 04:00:00 crc kubenswrapper[4667]: W0131 04:00:00.747484 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod704ddfdd_061e_4dff_a878_3c0755c07a6d.slice/crio-b920c6a2563ad8b65c9928730b3bd91df4e479a378cf24493c134e04e8770221 WatchSource:0}: Error finding container b920c6a2563ad8b65c9928730b3bd91df4e479a378cf24493c134e04e8770221: Status 404 returned error can't find the container with id b920c6a2563ad8b65c9928730b3bd91df4e479a378cf24493c134e04e8770221 Jan 31 04:00:01 crc kubenswrapper[4667]: I0131 04:00:01.562186 4667 generic.go:334] "Generic (PLEG): container finished" podID="704ddfdd-061e-4dff-a878-3c0755c07a6d" containerID="23c0ec42dbe697eaf0704b00e37db628311cffcda3636e6374cb1523a5355584" exitCode=0 Jan 31 04:00:01 crc kubenswrapper[4667]: I0131 04:00:01.562330 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz" event={"ID":"704ddfdd-061e-4dff-a878-3c0755c07a6d","Type":"ContainerDied","Data":"23c0ec42dbe697eaf0704b00e37db628311cffcda3636e6374cb1523a5355584"} Jan 31 04:00:01 crc kubenswrapper[4667]: I0131 04:00:01.564084 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz" event={"ID":"704ddfdd-061e-4dff-a878-3c0755c07a6d","Type":"ContainerStarted","Data":"b920c6a2563ad8b65c9928730b3bd91df4e479a378cf24493c134e04e8770221"} Jan 31 04:00:02 crc kubenswrapper[4667]: I0131 
04:00:02.574664 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz" event={"ID":"12187e5c-4ff4-4ab3-baea-3501646a5c68","Type":"ContainerStarted","Data":"f52aa84924803b1a099057283a71b170c2eecdf2d1ba2418763ef3afb493f382"} Jan 31 04:00:02 crc kubenswrapper[4667]: I0131 04:00:02.909379 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz" Jan 31 04:00:03 crc kubenswrapper[4667]: I0131 04:00:03.077984 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2j7t\" (UniqueName: \"kubernetes.io/projected/704ddfdd-061e-4dff-a878-3c0755c07a6d-kube-api-access-p2j7t\") pod \"704ddfdd-061e-4dff-a878-3c0755c07a6d\" (UID: \"704ddfdd-061e-4dff-a878-3c0755c07a6d\") " Jan 31 04:00:03 crc kubenswrapper[4667]: I0131 04:00:03.078073 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/704ddfdd-061e-4dff-a878-3c0755c07a6d-config-volume\") pod \"704ddfdd-061e-4dff-a878-3c0755c07a6d\" (UID: \"704ddfdd-061e-4dff-a878-3c0755c07a6d\") " Jan 31 04:00:03 crc kubenswrapper[4667]: I0131 04:00:03.078151 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/704ddfdd-061e-4dff-a878-3c0755c07a6d-secret-volume\") pod \"704ddfdd-061e-4dff-a878-3c0755c07a6d\" (UID: \"704ddfdd-061e-4dff-a878-3c0755c07a6d\") " Jan 31 04:00:03 crc kubenswrapper[4667]: I0131 04:00:03.079657 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/704ddfdd-061e-4dff-a878-3c0755c07a6d-config-volume" (OuterVolumeSpecName: "config-volume") pod "704ddfdd-061e-4dff-a878-3c0755c07a6d" (UID: "704ddfdd-061e-4dff-a878-3c0755c07a6d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:00:03 crc kubenswrapper[4667]: I0131 04:00:03.088161 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/704ddfdd-061e-4dff-a878-3c0755c07a6d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "704ddfdd-061e-4dff-a878-3c0755c07a6d" (UID: "704ddfdd-061e-4dff-a878-3c0755c07a6d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:00:03 crc kubenswrapper[4667]: I0131 04:00:03.090355 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/704ddfdd-061e-4dff-a878-3c0755c07a6d-kube-api-access-p2j7t" (OuterVolumeSpecName: "kube-api-access-p2j7t") pod "704ddfdd-061e-4dff-a878-3c0755c07a6d" (UID: "704ddfdd-061e-4dff-a878-3c0755c07a6d"). InnerVolumeSpecName "kube-api-access-p2j7t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:00:03 crc kubenswrapper[4667]: I0131 04:00:03.180424 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2j7t\" (UniqueName: \"kubernetes.io/projected/704ddfdd-061e-4dff-a878-3c0755c07a6d-kube-api-access-p2j7t\") on node \"crc\" DevicePath \"\"" Jan 31 04:00:03 crc kubenswrapper[4667]: I0131 04:00:03.181603 4667 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/704ddfdd-061e-4dff-a878-3c0755c07a6d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:00:03 crc kubenswrapper[4667]: I0131 04:00:03.181639 4667 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/704ddfdd-061e-4dff-a878-3c0755c07a6d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:00:03 crc kubenswrapper[4667]: I0131 04:00:03.584163 4667 generic.go:334] "Generic (PLEG): container finished" podID="12187e5c-4ff4-4ab3-baea-3501646a5c68" containerID="f52aa84924803b1a099057283a71b170c2eecdf2d1ba2418763ef3afb493f382" exitCode=0 Jan 31 04:00:03 crc kubenswrapper[4667]: I0131 04:00:03.585001 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz" event={"ID":"12187e5c-4ff4-4ab3-baea-3501646a5c68","Type":"ContainerDied","Data":"f52aa84924803b1a099057283a71b170c2eecdf2d1ba2418763ef3afb493f382"} Jan 31 04:00:03 crc kubenswrapper[4667]: I0131 04:00:03.593987 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz" event={"ID":"704ddfdd-061e-4dff-a878-3c0755c07a6d","Type":"ContainerDied","Data":"b920c6a2563ad8b65c9928730b3bd91df4e479a378cf24493c134e04e8770221"} Jan 31 04:00:03 crc kubenswrapper[4667]: I0131 04:00:03.594237 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b920c6a2563ad8b65c9928730b3bd91df4e479a378cf24493c134e04e8770221" Jan 31 04:00:03 crc kubenswrapper[4667]: I0131 04:00:03.594490 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz" Jan 31 04:00:04 crc kubenswrapper[4667]: I0131 04:00:04.610969 4667 generic.go:334] "Generic (PLEG): container finished" podID="12187e5c-4ff4-4ab3-baea-3501646a5c68" containerID="a58ff65d5ad8d9e0db674fe4d5afdd69cf3639808349288f8b3b5c3062c6c02b" exitCode=0 Jan 31 04:00:04 crc kubenswrapper[4667]: I0131 04:00:04.611023 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz" event={"ID":"12187e5c-4ff4-4ab3-baea-3501646a5c68","Type":"ContainerDied","Data":"a58ff65d5ad8d9e0db674fe4d5afdd69cf3639808349288f8b3b5c3062c6c02b"} Jan 31 04:00:05 crc kubenswrapper[4667]: I0131 04:00:05.877900 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz" Jan 31 04:00:06 crc kubenswrapper[4667]: I0131 04:00:06.027032 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12187e5c-4ff4-4ab3-baea-3501646a5c68-bundle\") pod \"12187e5c-4ff4-4ab3-baea-3501646a5c68\" (UID: \"12187e5c-4ff4-4ab3-baea-3501646a5c68\") " Jan 31 04:00:06 crc kubenswrapper[4667]: I0131 04:00:06.027192 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkbvd\" (UniqueName: \"kubernetes.io/projected/12187e5c-4ff4-4ab3-baea-3501646a5c68-kube-api-access-zkbvd\") pod \"12187e5c-4ff4-4ab3-baea-3501646a5c68\" (UID: \"12187e5c-4ff4-4ab3-baea-3501646a5c68\") " Jan 31 04:00:06 crc kubenswrapper[4667]: I0131 04:00:06.027267 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12187e5c-4ff4-4ab3-baea-3501646a5c68-util\") pod \"12187e5c-4ff4-4ab3-baea-3501646a5c68\" (UID: \"12187e5c-4ff4-4ab3-baea-3501646a5c68\") " Jan 31 04:00:06 crc kubenswrapper[4667]: I0131 04:00:06.032375 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12187e5c-4ff4-4ab3-baea-3501646a5c68-bundle" (OuterVolumeSpecName: "bundle") pod "12187e5c-4ff4-4ab3-baea-3501646a5c68" (UID: "12187e5c-4ff4-4ab3-baea-3501646a5c68"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:00:06 crc kubenswrapper[4667]: I0131 04:00:06.035432 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12187e5c-4ff4-4ab3-baea-3501646a5c68-kube-api-access-zkbvd" (OuterVolumeSpecName: "kube-api-access-zkbvd") pod "12187e5c-4ff4-4ab3-baea-3501646a5c68" (UID: "12187e5c-4ff4-4ab3-baea-3501646a5c68"). InnerVolumeSpecName "kube-api-access-zkbvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:00:06 crc kubenswrapper[4667]: I0131 04:00:06.038232 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12187e5c-4ff4-4ab3-baea-3501646a5c68-util" (OuterVolumeSpecName: "util") pod "12187e5c-4ff4-4ab3-baea-3501646a5c68" (UID: "12187e5c-4ff4-4ab3-baea-3501646a5c68"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:00:06 crc kubenswrapper[4667]: I0131 04:00:06.128986 4667 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12187e5c-4ff4-4ab3-baea-3501646a5c68-util\") on node \"crc\" DevicePath \"\"" Jan 31 04:00:06 crc kubenswrapper[4667]: I0131 04:00:06.129041 4667 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12187e5c-4ff4-4ab3-baea-3501646a5c68-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:00:06 crc kubenswrapper[4667]: I0131 04:00:06.129056 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkbvd\" (UniqueName: \"kubernetes.io/projected/12187e5c-4ff4-4ab3-baea-3501646a5c68-kube-api-access-zkbvd\") on node \"crc\" DevicePath \"\"" Jan 31 04:00:06 crc kubenswrapper[4667]: I0131 04:00:06.635648 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz" event={"ID":"12187e5c-4ff4-4ab3-baea-3501646a5c68","Type":"ContainerDied","Data":"ea3d3ba91a83fb042b4a5d0c9b89100f27af8fd377b4f5c4e37741ed0c058cfb"} Jan 31 04:00:06 crc kubenswrapper[4667]: I0131 04:00:06.635741 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea3d3ba91a83fb042b4a5d0c9b89100f27af8fd377b4f5c4e37741ed0c058cfb" Jan 31 04:00:06 crc kubenswrapper[4667]: I0131 04:00:06.636053 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz" Jan 31 04:00:07 crc kubenswrapper[4667]: I0131 04:00:07.696372 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-cjp5c"] Jan 31 04:00:07 crc kubenswrapper[4667]: E0131 04:00:07.701147 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="704ddfdd-061e-4dff-a878-3c0755c07a6d" containerName="collect-profiles" Jan 31 04:00:07 crc kubenswrapper[4667]: I0131 04:00:07.701392 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="704ddfdd-061e-4dff-a878-3c0755c07a6d" containerName="collect-profiles" Jan 31 04:00:07 crc kubenswrapper[4667]: E0131 04:00:07.701476 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12187e5c-4ff4-4ab3-baea-3501646a5c68" containerName="pull" Jan 31 04:00:07 crc kubenswrapper[4667]: I0131 04:00:07.701527 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="12187e5c-4ff4-4ab3-baea-3501646a5c68" containerName="pull" Jan 31 04:00:07 crc kubenswrapper[4667]: E0131 04:00:07.701578 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12187e5c-4ff4-4ab3-baea-3501646a5c68" containerName="util" Jan 31 04:00:07 crc kubenswrapper[4667]: I0131 04:00:07.701642 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="12187e5c-4ff4-4ab3-baea-3501646a5c68" containerName="util" Jan 31 04:00:07 crc kubenswrapper[4667]: E0131 04:00:07.701692 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12187e5c-4ff4-4ab3-baea-3501646a5c68" containerName="extract" Jan 31 04:00:07 crc kubenswrapper[4667]: I0131 04:00:07.701904 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="12187e5c-4ff4-4ab3-baea-3501646a5c68" containerName="extract" Jan 31 04:00:07 crc kubenswrapper[4667]: I0131 04:00:07.702106 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="704ddfdd-061e-4dff-a878-3c0755c07a6d" 
containerName="collect-profiles" Jan 31 04:00:07 crc kubenswrapper[4667]: I0131 04:00:07.702304 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="12187e5c-4ff4-4ab3-baea-3501646a5c68" containerName="extract" Jan 31 04:00:07 crc kubenswrapper[4667]: I0131 04:00:07.702794 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-cjp5c" Jan 31 04:00:07 crc kubenswrapper[4667]: I0131 04:00:07.707049 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-6dmck" Jan 31 04:00:07 crc kubenswrapper[4667]: I0131 04:00:07.707348 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 31 04:00:07 crc kubenswrapper[4667]: I0131 04:00:07.708005 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 31 04:00:07 crc kubenswrapper[4667]: I0131 04:00:07.708711 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-cjp5c"] Jan 31 04:00:07 crc kubenswrapper[4667]: I0131 04:00:07.856640 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnldp\" (UniqueName: \"kubernetes.io/projected/d4bb0958-e09f-488c-9d40-747ddd8ed31a-kube-api-access-pnldp\") pod \"nmstate-operator-646758c888-cjp5c\" (UID: \"d4bb0958-e09f-488c-9d40-747ddd8ed31a\") " pod="openshift-nmstate/nmstate-operator-646758c888-cjp5c" Jan 31 04:00:07 crc kubenswrapper[4667]: I0131 04:00:07.958572 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnldp\" (UniqueName: \"kubernetes.io/projected/d4bb0958-e09f-488c-9d40-747ddd8ed31a-kube-api-access-pnldp\") pod \"nmstate-operator-646758c888-cjp5c\" (UID: \"d4bb0958-e09f-488c-9d40-747ddd8ed31a\") " pod="openshift-nmstate/nmstate-operator-646758c888-cjp5c" Jan 31 04:00:07 crc kubenswrapper[4667]: I0131 04:00:07.983685 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnldp\" (UniqueName: \"kubernetes.io/projected/d4bb0958-e09f-488c-9d40-747ddd8ed31a-kube-api-access-pnldp\") pod \"nmstate-operator-646758c888-cjp5c\" (UID: \"d4bb0958-e09f-488c-9d40-747ddd8ed31a\") " pod="openshift-nmstate/nmstate-operator-646758c888-cjp5c" Jan 31 04:00:08 crc kubenswrapper[4667]: I0131 04:00:08.058087 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-cjp5c" Jan 31 04:00:08 crc kubenswrapper[4667]: I0131 04:00:08.302087 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-cjp5c"] Jan 31 04:00:08 crc kubenswrapper[4667]: I0131 04:00:08.648660 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-cjp5c" event={"ID":"d4bb0958-e09f-488c-9d40-747ddd8ed31a","Type":"ContainerStarted","Data":"6cb25dc91a3680eff52878578ebc06e8e787c588217973b7f17875a38c325f35"} Jan 31 04:00:11 crc kubenswrapper[4667]: I0131 04:00:11.671535 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-cjp5c" event={"ID":"d4bb0958-e09f-488c-9d40-747ddd8ed31a","Type":"ContainerStarted","Data":"a27c43acb4ec4b22eade5203b1fab7d1ad28ddf0f1c95d8f8eff994f2b33bf51"} Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.074753 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-cjp5c" podStartSLOduration=12.487742266 podStartE2EDuration="15.074731272s" podCreationTimestamp="2026-01-31 04:00:07 +0000 UTC" firstStartedPulling="2026-01-31 04:00:08.329902506 +0000 UTC m=+731.846237805" lastFinishedPulling="2026-01-31 04:00:10.916891512 +0000 UTC m=+734.433226811" observedRunningTime="2026-01-31 04:00:11.698690263 +0000 UTC m=+735.215025602" watchObservedRunningTime="2026-01-31 04:00:22.074731272 +0000 UTC m=+745.591066571" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.077891 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-4fm4q"] Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.078858 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-4fm4q" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.083697 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-8sx29" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.090854 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-4fm4q"] Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.101408 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-7nqzs"] Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.102248 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7nqzs" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.106865 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.134404 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-lcflt"] Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.139136 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-lcflt" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.152585 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-7nqzs"] Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.201030 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5713803d-a7eb-4197-bed0-8cfd7112add6-ovs-socket\") pod \"nmstate-handler-lcflt\" (UID: \"5713803d-a7eb-4197-bed0-8cfd7112add6\") " pod="openshift-nmstate/nmstate-handler-lcflt" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.201116 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5713803d-a7eb-4197-bed0-8cfd7112add6-nmstate-lock\") pod \"nmstate-handler-lcflt\" (UID: \"5713803d-a7eb-4197-bed0-8cfd7112add6\") " pod="openshift-nmstate/nmstate-handler-lcflt" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.201171 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmxpk\" (UniqueName: \"kubernetes.io/projected/454649dc-76dc-45ea-8395-90c8e06d3e2f-kube-api-access-pmxpk\") pod \"nmstate-webhook-8474b5b9d8-7nqzs\" (UID: \"454649dc-76dc-45ea-8395-90c8e06d3e2f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7nqzs" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.201207 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/454649dc-76dc-45ea-8395-90c8e06d3e2f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7nqzs\" (UID: \"454649dc-76dc-45ea-8395-90c8e06d3e2f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7nqzs" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.201232 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5958v\" (UniqueName: \"kubernetes.io/projected/848d059a-1bd2-4bec-ae9b-36352c162923-kube-api-access-5958v\") pod \"nmstate-metrics-54757c584b-4fm4q\" (UID: \"848d059a-1bd2-4bec-ae9b-36352c162923\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-4fm4q" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.201284 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxgzd\" (UniqueName: \"kubernetes.io/projected/5713803d-a7eb-4197-bed0-8cfd7112add6-kube-api-access-kxgzd\") pod \"nmstate-handler-lcflt\" (UID: \"5713803d-a7eb-4197-bed0-8cfd7112add6\") " pod="openshift-nmstate/nmstate-handler-lcflt" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.201315 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5713803d-a7eb-4197-bed0-8cfd7112add6-dbus-socket\") pod \"nmstate-handler-lcflt\" (UID: \"5713803d-a7eb-4197-bed0-8cfd7112add6\") " pod="openshift-nmstate/nmstate-handler-lcflt" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.288805 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qjzd"] Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.289980 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qjzd" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.293092 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.299446 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.304096 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-hqbs4" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.304762 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/454649dc-76dc-45ea-8395-90c8e06d3e2f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7nqzs\" (UID: \"454649dc-76dc-45ea-8395-90c8e06d3e2f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7nqzs" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.304828 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5958v\" (UniqueName: \"kubernetes.io/projected/848d059a-1bd2-4bec-ae9b-36352c162923-kube-api-access-5958v\") pod \"nmstate-metrics-54757c584b-4fm4q\" (UID: \"848d059a-1bd2-4bec-ae9b-36352c162923\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-4fm4q" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.304929 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxgzd\" (UniqueName: \"kubernetes.io/projected/5713803d-a7eb-4197-bed0-8cfd7112add6-kube-api-access-kxgzd\") pod \"nmstate-handler-lcflt\" (UID: \"5713803d-a7eb-4197-bed0-8cfd7112add6\") " pod="openshift-nmstate/nmstate-handler-lcflt" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.304955 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5713803d-a7eb-4197-bed0-8cfd7112add6-dbus-socket\") pod \"nmstate-handler-lcflt\" (UID: \"5713803d-a7eb-4197-bed0-8cfd7112add6\") " pod="openshift-nmstate/nmstate-handler-lcflt" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.304983 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5713803d-a7eb-4197-bed0-8cfd7112add6-ovs-socket\") pod \"nmstate-handler-lcflt\" (UID: \"5713803d-a7eb-4197-bed0-8cfd7112add6\") " pod="openshift-nmstate/nmstate-handler-lcflt" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.305010 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5713803d-a7eb-4197-bed0-8cfd7112add6-nmstate-lock\") pod \"nmstate-handler-lcflt\" (UID: \"5713803d-a7eb-4197-bed0-8cfd7112add6\") " pod="openshift-nmstate/nmstate-handler-lcflt" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.305036 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmxpk\" (UniqueName: \"kubernetes.io/projected/454649dc-76dc-45ea-8395-90c8e06d3e2f-kube-api-access-pmxpk\") pod \"nmstate-webhook-8474b5b9d8-7nqzs\" (UID: \"454649dc-76dc-45ea-8395-90c8e06d3e2f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7nqzs" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.305971 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/5713803d-a7eb-4197-bed0-8cfd7112add6-dbus-socket\") pod \"nmstate-handler-lcflt\" (UID: \"5713803d-a7eb-4197-bed0-8cfd7112add6\") " pod="openshift-nmstate/nmstate-handler-lcflt" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.306014 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5713803d-a7eb-4197-bed0-8cfd7112add6-ovs-socket\") pod \"nmstate-handler-lcflt\" (UID: \"5713803d-a7eb-4197-bed0-8cfd7112add6\") " pod="openshift-nmstate/nmstate-handler-lcflt" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.306035 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5713803d-a7eb-4197-bed0-8cfd7112add6-nmstate-lock\") pod \"nmstate-handler-lcflt\" (UID: \"5713803d-a7eb-4197-bed0-8cfd7112add6\") " pod="openshift-nmstate/nmstate-handler-lcflt" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.309281 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qjzd"] Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.333934 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5958v\" (UniqueName: \"kubernetes.io/projected/848d059a-1bd2-4bec-ae9b-36352c162923-kube-api-access-5958v\") pod \"nmstate-metrics-54757c584b-4fm4q\" (UID: \"848d059a-1bd2-4bec-ae9b-36352c162923\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-4fm4q" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.333930 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/454649dc-76dc-45ea-8395-90c8e06d3e2f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7nqzs\" (UID: \"454649dc-76dc-45ea-8395-90c8e06d3e2f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7nqzs" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.333934 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmxpk\" (UniqueName: \"kubernetes.io/projected/454649dc-76dc-45ea-8395-90c8e06d3e2f-kube-api-access-pmxpk\") pod \"nmstate-webhook-8474b5b9d8-7nqzs\" (UID: \"454649dc-76dc-45ea-8395-90c8e06d3e2f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7nqzs" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.338775 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxgzd\" (UniqueName: \"kubernetes.io/projected/5713803d-a7eb-4197-bed0-8cfd7112add6-kube-api-access-kxgzd\") pod \"nmstate-handler-lcflt\" (UID: \"5713803d-a7eb-4197-bed0-8cfd7112add6\") " pod="openshift-nmstate/nmstate-handler-lcflt" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.397640 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-4fm4q" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.406877 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/20571b84-83e2-494c-b690-9d7005ef51eb-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7qjzd\" (UID: \"20571b84-83e2-494c-b690-9d7005ef51eb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qjzd" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.407158 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb8jw\" (UniqueName: \"kubernetes.io/projected/20571b84-83e2-494c-b690-9d7005ef51eb-kube-api-access-gb8jw\") pod \"nmstate-console-plugin-7754f76f8b-7qjzd\" (UID: \"20571b84-83e2-494c-b690-9d7005ef51eb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qjzd" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.407235 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/20571b84-83e2-494c-b690-9d7005ef51eb-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7qjzd\" (UID: \"20571b84-83e2-494c-b690-9d7005ef51eb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qjzd" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.421339 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7nqzs" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.462734 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lcflt" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.498008 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6ffb9bff49-g2fl9"] Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.498812 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.515648 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/20571b84-83e2-494c-b690-9d7005ef51eb-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7qjzd\" (UID: \"20571b84-83e2-494c-b690-9d7005ef51eb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qjzd" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.515697 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb8jw\" (UniqueName: \"kubernetes.io/projected/20571b84-83e2-494c-b690-9d7005ef51eb-kube-api-access-gb8jw\") pod \"nmstate-console-plugin-7754f76f8b-7qjzd\" (UID: \"20571b84-83e2-494c-b690-9d7005ef51eb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qjzd" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.515723 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/20571b84-83e2-494c-b690-9d7005ef51eb-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7qjzd\" (UID: \"20571b84-83e2-494c-b690-9d7005ef51eb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qjzd" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.516697 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/20571b84-83e2-494c-b690-9d7005ef51eb-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7qjzd\" (UID: \"20571b84-83e2-494c-b690-9d7005ef51eb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qjzd" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.522593 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/20571b84-83e2-494c-b690-9d7005ef51eb-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7qjzd\" (UID: \"20571b84-83e2-494c-b690-9d7005ef51eb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qjzd" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.545720 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6ffb9bff49-g2fl9"] Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.556340 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb8jw\" (UniqueName: \"kubernetes.io/projected/20571b84-83e2-494c-b690-9d7005ef51eb-kube-api-access-gb8jw\") pod \"nmstate-console-plugin-7754f76f8b-7qjzd\" (UID: \"20571b84-83e2-494c-b690-9d7005ef51eb\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qjzd" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.617139 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04386fea-754d-439b-817a-d02e24a798b3-console-serving-cert\") pod \"console-6ffb9bff49-g2fl9\" (UID: \"04386fea-754d-439b-817a-d02e24a798b3\") " pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.617212 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04386fea-754d-439b-817a-d02e24a798b3-trusted-ca-bundle\") pod \"console-6ffb9bff49-g2fl9\" (UID: \"04386fea-754d-439b-817a-d02e24a798b3\") " 
pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.617250 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04386fea-754d-439b-817a-d02e24a798b3-service-ca\") pod \"console-6ffb9bff49-g2fl9\" (UID: \"04386fea-754d-439b-817a-d02e24a798b3\") " pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.617281 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04386fea-754d-439b-817a-d02e24a798b3-console-oauth-config\") pod \"console-6ffb9bff49-g2fl9\" (UID: \"04386fea-754d-439b-817a-d02e24a798b3\") " pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.617319 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/04386fea-754d-439b-817a-d02e24a798b3-console-config\") pod \"console-6ffb9bff49-g2fl9\" (UID: \"04386fea-754d-439b-817a-d02e24a798b3\") " pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.618554 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04386fea-754d-439b-817a-d02e24a798b3-oauth-serving-cert\") pod \"console-6ffb9bff49-g2fl9\" (UID: \"04386fea-754d-439b-817a-d02e24a798b3\") " pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.618678 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpz5t\" (UniqueName: \"kubernetes.io/projected/04386fea-754d-439b-817a-d02e24a798b3-kube-api-access-tpz5t\") pod \"console-6ffb9bff49-g2fl9\" (UID: \"04386fea-754d-439b-817a-d02e24a798b3\") " pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.620716 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qjzd" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.725495 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04386fea-754d-439b-817a-d02e24a798b3-console-serving-cert\") pod \"console-6ffb9bff49-g2fl9\" (UID: \"04386fea-754d-439b-817a-d02e24a798b3\") " pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.725564 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04386fea-754d-439b-817a-d02e24a798b3-trusted-ca-bundle\") pod \"console-6ffb9bff49-g2fl9\" (UID: \"04386fea-754d-439b-817a-d02e24a798b3\") " pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.725598 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04386fea-754d-439b-817a-d02e24a798b3-service-ca\") pod \"console-6ffb9bff49-g2fl9\" (UID: \"04386fea-754d-439b-817a-d02e24a798b3\") " pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.725630 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04386fea-754d-439b-817a-d02e24a798b3-console-oauth-config\") pod \"console-6ffb9bff49-g2fl9\" (UID: \"04386fea-754d-439b-817a-d02e24a798b3\") " pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.725684 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/04386fea-754d-439b-817a-d02e24a798b3-console-config\") pod \"console-6ffb9bff49-g2fl9\" (UID: \"04386fea-754d-439b-817a-d02e24a798b3\") " pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.725712 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04386fea-754d-439b-817a-d02e24a798b3-oauth-serving-cert\") pod \"console-6ffb9bff49-g2fl9\" (UID: \"04386fea-754d-439b-817a-d02e24a798b3\") " pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.725749 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpz5t\" (UniqueName: \"kubernetes.io/projected/04386fea-754d-439b-817a-d02e24a798b3-kube-api-access-tpz5t\") pod \"console-6ffb9bff49-g2fl9\" (UID: \"04386fea-754d-439b-817a-d02e24a798b3\") " pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.727746 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04386fea-754d-439b-817a-d02e24a798b3-service-ca\") pod \"console-6ffb9bff49-g2fl9\" (UID: \"04386fea-754d-439b-817a-d02e24a798b3\") " pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.729884 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04386fea-754d-439b-817a-d02e24a798b3-oauth-serving-cert\") pod \"console-6ffb9bff49-g2fl9\" (UID: 
\"04386fea-754d-439b-817a-d02e24a798b3\") " pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.731887 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/04386fea-754d-439b-817a-d02e24a798b3-console-config\") pod \"console-6ffb9bff49-g2fl9\" (UID: \"04386fea-754d-439b-817a-d02e24a798b3\") " pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.734464 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04386fea-754d-439b-817a-d02e24a798b3-trusted-ca-bundle\") pod \"console-6ffb9bff49-g2fl9\" (UID: \"04386fea-754d-439b-817a-d02e24a798b3\") " pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.743194 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04386fea-754d-439b-817a-d02e24a798b3-console-serving-cert\") pod \"console-6ffb9bff49-g2fl9\" (UID: \"04386fea-754d-439b-817a-d02e24a798b3\") " pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.757378 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpz5t\" (UniqueName: \"kubernetes.io/projected/04386fea-754d-439b-817a-d02e24a798b3-kube-api-access-tpz5t\") pod \"console-6ffb9bff49-g2fl9\" (UID: \"04386fea-754d-439b-817a-d02e24a798b3\") " pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.765683 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lcflt" event={"ID":"5713803d-a7eb-4197-bed0-8cfd7112add6","Type":"ContainerStarted","Data":"03b6deeca29c80ff817f63c3da4c5dbd28e1bbef15453ec547d17eb1bc1457ee"} Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.774737 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04386fea-754d-439b-817a-d02e24a798b3-console-oauth-config\") pod \"console-6ffb9bff49-g2fl9\" (UID: \"04386fea-754d-439b-817a-d02e24a798b3\") " pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: W0131 04:00:22.806186 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod848d059a_1bd2_4bec_ae9b_36352c162923.slice/crio-2dc6eb7c6a1e4096fce759a4c7bea30b89f615b5b0cc73b97d4819038de90f07 WatchSource:0}: Error finding container 2dc6eb7c6a1e4096fce759a4c7bea30b89f615b5b0cc73b97d4819038de90f07: Status 404 returned error can't find the container with id 2dc6eb7c6a1e4096fce759a4c7bea30b89f615b5b0cc73b97d4819038de90f07 Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.808698 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-4fm4q"] Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.852830 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.873441 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qjzd"] Jan 31 04:00:22 crc kubenswrapper[4667]: I0131 04:00:22.940489 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-7nqzs"] Jan 31 04:00:23 crc kubenswrapper[4667]: I0131 04:00:23.071400 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6ffb9bff49-g2fl9"] Jan 31 04:00:23 crc kubenswrapper[4667]: W0131 04:00:23.073963 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04386fea_754d_439b_817a_d02e24a798b3.slice/crio-a3b4d3bde6b9ece6ff9e8f11bf8897b535343bd930fcdf2e04e6c3f5be59eafc WatchSource:0}: Error finding container a3b4d3bde6b9ece6ff9e8f11bf8897b535343bd930fcdf2e04e6c3f5be59eafc: Status 404 returned error can't find the container with id a3b4d3bde6b9ece6ff9e8f11bf8897b535343bd930fcdf2e04e6c3f5be59eafc Jan 31 04:00:23 crc kubenswrapper[4667]: I0131 04:00:23.776736 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7nqzs" event={"ID":"454649dc-76dc-45ea-8395-90c8e06d3e2f","Type":"ContainerStarted","Data":"a622c3422b9007224df5dff3ba6cde9ff3a236af314cd4ac048003410b372d5d"} Jan 31 04:00:23 crc kubenswrapper[4667]: I0131 04:00:23.781237 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qjzd" event={"ID":"20571b84-83e2-494c-b690-9d7005ef51eb","Type":"ContainerStarted","Data":"e99c3d4d5bba61bad940baf2cc512c778106f5b9a1502621b63ed9b880fdf264"} Jan 31 04:00:23 crc kubenswrapper[4667]: I0131 04:00:23.784032 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6ffb9bff49-g2fl9" event={"ID":"04386fea-754d-439b-817a-d02e24a798b3","Type":"ContainerStarted","Data":"164a43a4b4cad7afc63889e749f6250ef97bfef91890d1142a28f6c52a8f31e5"} Jan 31 04:00:23 crc kubenswrapper[4667]: I0131 04:00:23.784087 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6ffb9bff49-g2fl9" event={"ID":"04386fea-754d-439b-817a-d02e24a798b3","Type":"ContainerStarted","Data":"a3b4d3bde6b9ece6ff9e8f11bf8897b535343bd930fcdf2e04e6c3f5be59eafc"} Jan 31 04:00:23 crc kubenswrapper[4667]: I0131 04:00:23.787057 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-4fm4q" event={"ID":"848d059a-1bd2-4bec-ae9b-36352c162923","Type":"ContainerStarted","Data":"2dc6eb7c6a1e4096fce759a4c7bea30b89f615b5b0cc73b97d4819038de90f07"} Jan 31 04:00:23 crc kubenswrapper[4667]: I0131 04:00:23.818114 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6ffb9bff49-g2fl9" podStartSLOduration=1.818085271 podStartE2EDuration="1.818085271s" podCreationTimestamp="2026-01-31 04:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:00:23.814442805 +0000 UTC m=+747.330778104" watchObservedRunningTime="2026-01-31 04:00:23.818085271 +0000 UTC m=+747.334420570" Jan 31 04:00:26 crc kubenswrapper[4667]: I0131 04:00:26.813131 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-4fm4q" 
event={"ID":"848d059a-1bd2-4bec-ae9b-36352c162923","Type":"ContainerStarted","Data":"f905713531df6190293270b0acd5b80774f76107b1591759a06395d60b19f96e"} Jan 31 04:00:26 crc kubenswrapper[4667]: I0131 04:00:26.814472 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lcflt" event={"ID":"5713803d-a7eb-4197-bed0-8cfd7112add6","Type":"ContainerStarted","Data":"8de88e6557cf769197b7d7b990e61767831ee79ac179e38a6ee0aab129aea619"} Jan 31 04:00:26 crc kubenswrapper[4667]: I0131 04:00:26.814634 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-lcflt" Jan 31 04:00:26 crc kubenswrapper[4667]: I0131 04:00:26.817020 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7nqzs" event={"ID":"454649dc-76dc-45ea-8395-90c8e06d3e2f","Type":"ContainerStarted","Data":"c21fc1ada659fde2902b1a8a6898c836cfd4c4294813079bd3ced9af364b2594"} Jan 31 04:00:26 crc kubenswrapper[4667]: I0131 04:00:26.817179 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7nqzs" Jan 31 04:00:26 crc kubenswrapper[4667]: I0131 04:00:26.819311 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qjzd" event={"ID":"20571b84-83e2-494c-b690-9d7005ef51eb","Type":"ContainerStarted","Data":"10dc2a47e46ba617c6b76f9b118fb0a1c88e047d85472d45b169b48b92b36629"} Jan 31 04:00:26 crc kubenswrapper[4667]: I0131 04:00:26.835162 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-lcflt" podStartSLOduration=1.34119066 podStartE2EDuration="4.835140899s" podCreationTimestamp="2026-01-31 04:00:22 +0000 UTC" firstStartedPulling="2026-01-31 04:00:22.553342729 +0000 UTC m=+746.069678028" lastFinishedPulling="2026-01-31 04:00:26.047292958 +0000 UTC m=+749.563628267" observedRunningTime="2026-01-31 04:00:26.833470735 +0000 UTC m=+750.349806034" watchObservedRunningTime="2026-01-31 04:00:26.835140899 +0000 UTC m=+750.351476198" Jan 31 04:00:26 crc kubenswrapper[4667]: I0131 04:00:26.906429 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7nqzs" podStartSLOduration=1.809258187 podStartE2EDuration="4.906405215s" podCreationTimestamp="2026-01-31 04:00:22 +0000 UTC" firstStartedPulling="2026-01-31 04:00:22.957388252 +0000 UTC m=+746.473723551" lastFinishedPulling="2026-01-31 04:00:26.05453528 +0000 UTC m=+749.570870579" observedRunningTime="2026-01-31 04:00:26.856222957 +0000 UTC m=+750.372558276" watchObservedRunningTime="2026-01-31 04:00:26.906405215 +0000 UTC m=+750.422740514" Jan 31 04:00:26 crc kubenswrapper[4667]: I0131 04:00:26.915804 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7qjzd" podStartSLOduration=1.767644716 podStartE2EDuration="4.915789304s" podCreationTimestamp="2026-01-31 04:00:22 +0000 UTC" firstStartedPulling="2026-01-31 04:00:22.896220213 +0000 UTC m=+746.412555512" lastFinishedPulling="2026-01-31 04:00:26.044364771 +0000 UTC m=+749.560700100" observedRunningTime="2026-01-31 04:00:26.893025231 +0000 UTC m=+750.409360530" watchObservedRunningTime="2026-01-31 04:00:26.915789304 +0000 UTC m=+750.432124603" Jan 31 04:00:29 crc kubenswrapper[4667]: I0131 04:00:29.845043 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-54757c584b-4fm4q" event={"ID":"848d059a-1bd2-4bec-ae9b-36352c162923","Type":"ContainerStarted","Data":"0cc32c66ae6e0f47afbc109cd3f466c2c5bd281c3b2d0522a69c1e789f4d806c"} Jan 31 04:00:29 crc kubenswrapper[4667]: I0131 04:00:29.875195 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-4fm4q" podStartSLOduration=1.9439709729999999 podStartE2EDuration="7.875174446s" podCreationTimestamp="2026-01-31 04:00:22 +0000 UTC" firstStartedPulling="2026-01-31 04:00:22.810393872 +0000 UTC m=+746.326729171" lastFinishedPulling="2026-01-31 04:00:28.741597335 +0000 UTC m=+752.257932644" observedRunningTime="2026-01-31 04:00:29.869560237 +0000 UTC m=+753.385895576" watchObservedRunningTime="2026-01-31 04:00:29.875174446 +0000 UTC m=+753.391509745" Jan 31 04:00:32 crc kubenswrapper[4667]: I0131 04:00:32.496081 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-lcflt" Jan 31 04:00:32 crc kubenswrapper[4667]: I0131 04:00:32.853857 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:32 crc kubenswrapper[4667]: I0131 04:00:32.853920 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:32 crc kubenswrapper[4667]: I0131 04:00:32.861164 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:32 crc kubenswrapper[4667]: I0131 04:00:32.870198 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6ffb9bff49-g2fl9" Jan 31 04:00:32 crc kubenswrapper[4667]: I0131 04:00:32.959959 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-wjsth"] Jan 31 04:00:42 crc kubenswrapper[4667]: I0131 04:00:42.429089 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7nqzs" Jan 31 04:00:43 crc kubenswrapper[4667]: I0131 04:00:43.787155 4667 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 04:00:56 crc kubenswrapper[4667]: I0131 04:00:56.475299 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw"] Jan 31 04:00:56 crc kubenswrapper[4667]: I0131 04:00:56.477460 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw" Jan 31 04:00:56 crc kubenswrapper[4667]: I0131 04:00:56.481197 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 04:00:56 crc kubenswrapper[4667]: I0131 04:00:56.494131 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw"] Jan 31 04:00:56 crc kubenswrapper[4667]: I0131 04:00:56.543375 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65537ed9-39d5-40b0-82c9-a4b3d9dc6551-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw\" (UID: \"65537ed9-39d5-40b0-82c9-a4b3d9dc6551\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw" Jan 31 04:00:56 crc kubenswrapper[4667]: I0131 04:00:56.543442 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65537ed9-39d5-40b0-82c9-a4b3d9dc6551-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw\" (UID: \"65537ed9-39d5-40b0-82c9-a4b3d9dc6551\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw" Jan 31 04:00:56 crc kubenswrapper[4667]: I0131 04:00:56.543501 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh8tk\" (UniqueName: \"kubernetes.io/projected/65537ed9-39d5-40b0-82c9-a4b3d9dc6551-kube-api-access-dh8tk\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw\" (UID: \"65537ed9-39d5-40b0-82c9-a4b3d9dc6551\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw" Jan 31 04:00:56 crc kubenswrapper[4667]: I0131 04:00:56.645575 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh8tk\" (UniqueName: \"kubernetes.io/projected/65537ed9-39d5-40b0-82c9-a4b3d9dc6551-kube-api-access-dh8tk\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw\" (UID: \"65537ed9-39d5-40b0-82c9-a4b3d9dc6551\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw" Jan 31 04:00:56 crc kubenswrapper[4667]: I0131 04:00:56.645709 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65537ed9-39d5-40b0-82c9-a4b3d9dc6551-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw\" (UID: \"65537ed9-39d5-40b0-82c9-a4b3d9dc6551\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw" Jan 31 04:00:56 crc kubenswrapper[4667]: I0131 04:00:56.645743 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65537ed9-39d5-40b0-82c9-a4b3d9dc6551-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw\" (UID: \"65537ed9-39d5-40b0-82c9-a4b3d9dc6551\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw" Jan 31 04:00:56 crc kubenswrapper[4667]: I0131 04:00:56.646370 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/65537ed9-39d5-40b0-82c9-a4b3d9dc6551-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw\" (UID: \"65537ed9-39d5-40b0-82c9-a4b3d9dc6551\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw" Jan 31 04:00:56 crc kubenswrapper[4667]: I0131 04:00:56.646546 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65537ed9-39d5-40b0-82c9-a4b3d9dc6551-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw\" (UID: \"65537ed9-39d5-40b0-82c9-a4b3d9dc6551\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw" Jan 31 04:00:56 crc kubenswrapper[4667]: I0131 04:00:56.677433 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh8tk\" (UniqueName: \"kubernetes.io/projected/65537ed9-39d5-40b0-82c9-a4b3d9dc6551-kube-api-access-dh8tk\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw\" (UID: \"65537ed9-39d5-40b0-82c9-a4b3d9dc6551\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw" Jan 31 04:00:56 crc kubenswrapper[4667]: I0131 04:00:56.799096 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw" Jan 31 04:00:57 crc kubenswrapper[4667]: W0131 04:00:57.071329 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65537ed9_39d5_40b0_82c9_a4b3d9dc6551.slice/crio-52dbc493c1a534009d7e3a2f50f01d5058a3d072790428d3d98f2122d3cc95d1 WatchSource:0}: Error finding container 52dbc493c1a534009d7e3a2f50f01d5058a3d072790428d3d98f2122d3cc95d1: Status 404 returned error can't find the container with id 52dbc493c1a534009d7e3a2f50f01d5058a3d072790428d3d98f2122d3cc95d1 Jan 31 04:00:57 crc kubenswrapper[4667]: I0131 04:00:57.072363 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw"] Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.322395 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-wjsth" podUID="4f281370-6419-4dfb-b21f-9d1c9c7eddaa" containerName="console" containerID="cri-o://8e0b5bceb97157464b0e39472d0fb5e7c96918020db6f8b97a4b753317739cd6" gracePeriod=15 Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.345527 4667 generic.go:334] "Generic (PLEG): container finished" podID="65537ed9-39d5-40b0-82c9-a4b3d9dc6551" containerID="69cd0c85bebde00b1dbfaca6482b5744cbf6521abc1ca70d8c0b408bf0efb6ed" exitCode=0 Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.345608 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw" event={"ID":"65537ed9-39d5-40b0-82c9-a4b3d9dc6551","Type":"ContainerDied","Data":"69cd0c85bebde00b1dbfaca6482b5744cbf6521abc1ca70d8c0b408bf0efb6ed"} Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.345660 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw" event={"ID":"65537ed9-39d5-40b0-82c9-a4b3d9dc6551","Type":"ContainerStarted","Data":"52dbc493c1a534009d7e3a2f50f01d5058a3d072790428d3d98f2122d3cc95d1"} Jan 31 04:00:58 crc 
Jan 31 04:00:58 crc kubenswrapper[4667]: E0131 04:00:58.562155 4667 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f281370_6419_4dfb_b21f_9d1c9c7eddaa.slice/crio-conmon-8e0b5bceb97157464b0e39472d0fb5e7c96918020db6f8b97a4b753317739cd6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f281370_6419_4dfb_b21f_9d1c9c7eddaa.slice/crio-8e0b5bceb97157464b0e39472d0fb5e7c96918020db6f8b97a4b753317739cd6.scope\": RecentStats: unable to find data in memory cache]"
Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.725952 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gxnf2"]
Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.727449 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gxnf2"
Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.737415 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c56e881-0420-4250-98d6-896940599521-utilities\") pod \"redhat-operators-gxnf2\" (UID: \"3c56e881-0420-4250-98d6-896940599521\") " pod="openshift-marketplace/redhat-operators-gxnf2"
Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.737517 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c56e881-0420-4250-98d6-896940599521-catalog-content\") pod \"redhat-operators-gxnf2\" (UID: \"3c56e881-0420-4250-98d6-896940599521\") " pod="openshift-marketplace/redhat-operators-gxnf2"
Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.737552 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rmmq\" (UniqueName: \"kubernetes.io/projected/3c56e881-0420-4250-98d6-896940599521-kube-api-access-8rmmq\") pod \"redhat-operators-gxnf2\" (UID: \"3c56e881-0420-4250-98d6-896940599521\") " pod="openshift-marketplace/redhat-operators-gxnf2"
Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.744445 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gxnf2"]
Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.787040 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wjsth_4f281370-6419-4dfb-b21f-9d1c9c7eddaa/console/0.log"
Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.787126 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wjsth"
Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.839017 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c56e881-0420-4250-98d6-896940599521-catalog-content\") pod \"redhat-operators-gxnf2\" (UID: \"3c56e881-0420-4250-98d6-896940599521\") " pod="openshift-marketplace/redhat-operators-gxnf2"
Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.839420 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rmmq\" (UniqueName: \"kubernetes.io/projected/3c56e881-0420-4250-98d6-896940599521-kube-api-access-8rmmq\") pod \"redhat-operators-gxnf2\" (UID: \"3c56e881-0420-4250-98d6-896940599521\") " pod="openshift-marketplace/redhat-operators-gxnf2"
Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.839514 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c56e881-0420-4250-98d6-896940599521-utilities\") pod \"redhat-operators-gxnf2\" (UID: \"3c56e881-0420-4250-98d6-896940599521\") " pod="openshift-marketplace/redhat-operators-gxnf2"
Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.840177 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c56e881-0420-4250-98d6-896940599521-catalog-content\") pod \"redhat-operators-gxnf2\" (UID: \"3c56e881-0420-4250-98d6-896940599521\") " pod="openshift-marketplace/redhat-operators-gxnf2"
Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.840720 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c56e881-0420-4250-98d6-896940599521-utilities\") pod \"redhat-operators-gxnf2\" (UID: \"3c56e881-0420-4250-98d6-896940599521\") " pod="openshift-marketplace/redhat-operators-gxnf2"
Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.867666 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rmmq\" (UniqueName: \"kubernetes.io/projected/3c56e881-0420-4250-98d6-896940599521-kube-api-access-8rmmq\") pod \"redhat-operators-gxnf2\" (UID: \"3c56e881-0420-4250-98d6-896940599521\") " pod="openshift-marketplace/redhat-operators-gxnf2"
Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.940376 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-console-oauth-config\") pod \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") "
Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.940434 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-service-ca\") pod \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") "
Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.940477 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-trusted-ca-bundle\") pod \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") "
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-oauth-serving-cert\") pod \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.940578 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dv2f\" (UniqueName: \"kubernetes.io/projected/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-kube-api-access-8dv2f\") pod \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.940599 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-console-serving-cert\") pod \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.940623 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-console-config\") pod \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\" (UID: \"4f281370-6419-4dfb-b21f-9d1c9c7eddaa\") " Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.941774 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4f281370-6419-4dfb-b21f-9d1c9c7eddaa" (UID: "4f281370-6419-4dfb-b21f-9d1c9c7eddaa"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.941926 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-service-ca" (OuterVolumeSpecName: "service-ca") pod "4f281370-6419-4dfb-b21f-9d1c9c7eddaa" (UID: "4f281370-6419-4dfb-b21f-9d1c9c7eddaa"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.942477 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4f281370-6419-4dfb-b21f-9d1c9c7eddaa" (UID: "4f281370-6419-4dfb-b21f-9d1c9c7eddaa"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.942991 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-console-config" (OuterVolumeSpecName: "console-config") pod "4f281370-6419-4dfb-b21f-9d1c9c7eddaa" (UID: "4f281370-6419-4dfb-b21f-9d1c9c7eddaa"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.970236 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-kube-api-access-8dv2f" (OuterVolumeSpecName: "kube-api-access-8dv2f") pod "4f281370-6419-4dfb-b21f-9d1c9c7eddaa" (UID: "4f281370-6419-4dfb-b21f-9d1c9c7eddaa"). 
InnerVolumeSpecName "kube-api-access-8dv2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.970990 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4f281370-6419-4dfb-b21f-9d1c9c7eddaa" (UID: "4f281370-6419-4dfb-b21f-9d1c9c7eddaa"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:00:58 crc kubenswrapper[4667]: I0131 04:00:58.971566 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4f281370-6419-4dfb-b21f-9d1c9c7eddaa" (UID: "4f281370-6419-4dfb-b21f-9d1c9c7eddaa"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:00:59 crc kubenswrapper[4667]: I0131 04:00:59.042365 4667 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:00:59 crc kubenswrapper[4667]: I0131 04:00:59.042414 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dv2f\" (UniqueName: \"kubernetes.io/projected/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-kube-api-access-8dv2f\") on node \"crc\" DevicePath \"\"" Jan 31 04:00:59 crc kubenswrapper[4667]: I0131 04:00:59.042426 4667 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 04:00:59 crc kubenswrapper[4667]: I0131 04:00:59.042435 4667 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:00:59 crc kubenswrapper[4667]: I0131 04:00:59.042444 4667 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:00:59 crc kubenswrapper[4667]: I0131 04:00:59.042455 4667 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 04:00:59 crc kubenswrapper[4667]: I0131 04:00:59.042465 4667 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f281370-6419-4dfb-b21f-9d1c9c7eddaa-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:00:59 crc kubenswrapper[4667]: I0131 04:00:59.081071 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gxnf2" Jan 31 04:00:59 crc kubenswrapper[4667]: I0131 04:00:59.316603 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gxnf2"] Jan 31 04:00:59 crc kubenswrapper[4667]: W0131 04:00:59.321357 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c56e881_0420_4250_98d6_896940599521.slice/crio-87a12e3c8468ae51f3486b2122e25b911f2a0f0a71f823d2f176fcf7050ac5de WatchSource:0}: Error finding container 87a12e3c8468ae51f3486b2122e25b911f2a0f0a71f823d2f176fcf7050ac5de: Status 404 returned error can't find the container with id 87a12e3c8468ae51f3486b2122e25b911f2a0f0a71f823d2f176fcf7050ac5de Jan 31 04:00:59 crc kubenswrapper[4667]: I0131 04:00:59.355362 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wjsth_4f281370-6419-4dfb-b21f-9d1c9c7eddaa/console/0.log" Jan 31 04:00:59 crc kubenswrapper[4667]: I0131 04:00:59.355411 4667 generic.go:334] "Generic (PLEG): container finished" podID="4f281370-6419-4dfb-b21f-9d1c9c7eddaa" containerID="8e0b5bceb97157464b0e39472d0fb5e7c96918020db6f8b97a4b753317739cd6" exitCode=2 Jan 31 04:00:59 crc kubenswrapper[4667]: I0131 04:00:59.355469 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wjsth" event={"ID":"4f281370-6419-4dfb-b21f-9d1c9c7eddaa","Type":"ContainerDied","Data":"8e0b5bceb97157464b0e39472d0fb5e7c96918020db6f8b97a4b753317739cd6"} Jan 31 04:00:59 crc kubenswrapper[4667]: I0131 04:00:59.355504 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wjsth" event={"ID":"4f281370-6419-4dfb-b21f-9d1c9c7eddaa","Type":"ContainerDied","Data":"ec5eda3ca334e4fe609b32378cf754c8b3faf595211461fbc3e6d836a8f1a033"} Jan 31 04:00:59 crc kubenswrapper[4667]: I0131 04:00:59.355524 4667 scope.go:117] "RemoveContainer" containerID="8e0b5bceb97157464b0e39472d0fb5e7c96918020db6f8b97a4b753317739cd6" Jan 31 04:00:59 crc kubenswrapper[4667]: I0131 04:00:59.355624 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-wjsth" Jan 31 04:00:59 crc kubenswrapper[4667]: I0131 04:00:59.356930 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gxnf2" event={"ID":"3c56e881-0420-4250-98d6-896940599521","Type":"ContainerStarted","Data":"87a12e3c8468ae51f3486b2122e25b911f2a0f0a71f823d2f176fcf7050ac5de"} Jan 31 04:00:59 crc kubenswrapper[4667]: I0131 04:00:59.379768 4667 scope.go:117] "RemoveContainer" containerID="8e0b5bceb97157464b0e39472d0fb5e7c96918020db6f8b97a4b753317739cd6" Jan 31 04:00:59 crc kubenswrapper[4667]: I0131 04:00:59.384110 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-wjsth"] Jan 31 04:00:59 crc kubenswrapper[4667]: E0131 04:00:59.388756 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e0b5bceb97157464b0e39472d0fb5e7c96918020db6f8b97a4b753317739cd6\": container with ID starting with 8e0b5bceb97157464b0e39472d0fb5e7c96918020db6f8b97a4b753317739cd6 not found: ID does not exist" containerID="8e0b5bceb97157464b0e39472d0fb5e7c96918020db6f8b97a4b753317739cd6" Jan 31 04:00:59 crc kubenswrapper[4667]: I0131 04:00:59.388826 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e0b5bceb97157464b0e39472d0fb5e7c96918020db6f8b97a4b753317739cd6"} err="failed to get container status \"8e0b5bceb97157464b0e39472d0fb5e7c96918020db6f8b97a4b753317739cd6\": rpc error: code = NotFound desc = could not find container \"8e0b5bceb97157464b0e39472d0fb5e7c96918020db6f8b97a4b753317739cd6\": container with ID starting with 8e0b5bceb97157464b0e39472d0fb5e7c96918020db6f8b97a4b753317739cd6 not found: ID does not exist" Jan 31 04:00:59 crc kubenswrapper[4667]: I0131 04:00:59.390372 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-wjsth"] Jan 31 04:01:00 crc kubenswrapper[4667]: I0131 04:01:00.368076 4667 generic.go:334] "Generic (PLEG): container finished" podID="3c56e881-0420-4250-98d6-896940599521" containerID="f370a6bd40e1124aab6a4f8b77f811821d613b10fbba0d596eba3c8010c4935d" exitCode=0 Jan 31 04:01:00 crc kubenswrapper[4667]: I0131 04:01:00.368199 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gxnf2" event={"ID":"3c56e881-0420-4250-98d6-896940599521","Type":"ContainerDied","Data":"f370a6bd40e1124aab6a4f8b77f811821d613b10fbba0d596eba3c8010c4935d"} Jan 31 04:01:01 crc kubenswrapper[4667]: I0131 04:01:01.301186 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f281370-6419-4dfb-b21f-9d1c9c7eddaa" path="/var/lib/kubelet/pods/4f281370-6419-4dfb-b21f-9d1c9c7eddaa/volumes" Jan 31 04:01:01 crc kubenswrapper[4667]: I0131 04:01:01.380467 4667 generic.go:334] "Generic (PLEG): container finished" podID="65537ed9-39d5-40b0-82c9-a4b3d9dc6551" containerID="27bb6242394938b70b84c5d2841e4fdc810fefec73aab282efa8499dfe126234" exitCode=0 Jan 31 04:01:01 crc kubenswrapper[4667]: I0131 04:01:01.380576 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw" event={"ID":"65537ed9-39d5-40b0-82c9-a4b3d9dc6551","Type":"ContainerDied","Data":"27bb6242394938b70b84c5d2841e4fdc810fefec73aab282efa8499dfe126234"} Jan 31 04:01:01 crc kubenswrapper[4667]: I0131 04:01:01.382626 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-gxnf2" event={"ID":"3c56e881-0420-4250-98d6-896940599521","Type":"ContainerStarted","Data":"252dd7074b29448e0ee536407e02e7f2711eef7712ca7768edb62e5be7d70422"} Jan 31 04:01:02 crc kubenswrapper[4667]: I0131 04:01:02.392387 4667 generic.go:334] "Generic (PLEG): container finished" podID="3c56e881-0420-4250-98d6-896940599521" containerID="252dd7074b29448e0ee536407e02e7f2711eef7712ca7768edb62e5be7d70422" exitCode=0 Jan 31 04:01:02 crc kubenswrapper[4667]: I0131 04:01:02.392475 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gxnf2" event={"ID":"3c56e881-0420-4250-98d6-896940599521","Type":"ContainerDied","Data":"252dd7074b29448e0ee536407e02e7f2711eef7712ca7768edb62e5be7d70422"} Jan 31 04:01:02 crc kubenswrapper[4667]: I0131 04:01:02.399495 4667 generic.go:334] "Generic (PLEG): container finished" podID="65537ed9-39d5-40b0-82c9-a4b3d9dc6551" containerID="585dc9a7f82d1096df812f408bb0183033bb12d9e38814011c2d7cd24bf2c156" exitCode=0 Jan 31 04:01:02 crc kubenswrapper[4667]: I0131 04:01:02.399557 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw" event={"ID":"65537ed9-39d5-40b0-82c9-a4b3d9dc6551","Type":"ContainerDied","Data":"585dc9a7f82d1096df812f408bb0183033bb12d9e38814011c2d7cd24bf2c156"} Jan 31 04:01:03 crc kubenswrapper[4667]: I0131 04:01:03.409234 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gxnf2" event={"ID":"3c56e881-0420-4250-98d6-896940599521","Type":"ContainerStarted","Data":"6a9c046ecedddb1d795d6fc537ba0c9557afefaf726ca9a855792646be76a127"} Jan 31 04:01:03 crc kubenswrapper[4667]: I0131 04:01:03.438921 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gxnf2" podStartSLOduration=2.947288381 podStartE2EDuration="5.438896632s" podCreationTimestamp="2026-01-31 04:00:58 +0000 UTC" firstStartedPulling="2026-01-31 04:01:00.372212081 +0000 UTC m=+783.888547380" lastFinishedPulling="2026-01-31 04:01:02.863820292 +0000 UTC m=+786.380155631" observedRunningTime="2026-01-31 04:01:03.434935317 +0000 UTC m=+786.951270626" watchObservedRunningTime="2026-01-31 04:01:03.438896632 +0000 UTC m=+786.955231941" Jan 31 04:01:03 crc kubenswrapper[4667]: I0131 04:01:03.711005 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw" Jan 31 04:01:03 crc kubenswrapper[4667]: I0131 04:01:03.816851 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65537ed9-39d5-40b0-82c9-a4b3d9dc6551-util\") pod \"65537ed9-39d5-40b0-82c9-a4b3d9dc6551\" (UID: \"65537ed9-39d5-40b0-82c9-a4b3d9dc6551\") " Jan 31 04:01:03 crc kubenswrapper[4667]: I0131 04:01:03.816975 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65537ed9-39d5-40b0-82c9-a4b3d9dc6551-bundle\") pod \"65537ed9-39d5-40b0-82c9-a4b3d9dc6551\" (UID: \"65537ed9-39d5-40b0-82c9-a4b3d9dc6551\") " Jan 31 04:01:03 crc kubenswrapper[4667]: I0131 04:01:03.816999 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh8tk\" (UniqueName: \"kubernetes.io/projected/65537ed9-39d5-40b0-82c9-a4b3d9dc6551-kube-api-access-dh8tk\") pod \"65537ed9-39d5-40b0-82c9-a4b3d9dc6551\" (UID: \"65537ed9-39d5-40b0-82c9-a4b3d9dc6551\") " Jan 31 04:01:03 crc kubenswrapper[4667]: I0131 04:01:03.818239 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65537ed9-39d5-40b0-82c9-a4b3d9dc6551-bundle" (OuterVolumeSpecName: "bundle") pod "65537ed9-39d5-40b0-82c9-a4b3d9dc6551" (UID: "65537ed9-39d5-40b0-82c9-a4b3d9dc6551"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:01:03 crc kubenswrapper[4667]: I0131 04:01:03.828133 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65537ed9-39d5-40b0-82c9-a4b3d9dc6551-kube-api-access-dh8tk" (OuterVolumeSpecName: "kube-api-access-dh8tk") pod "65537ed9-39d5-40b0-82c9-a4b3d9dc6551" (UID: "65537ed9-39d5-40b0-82c9-a4b3d9dc6551"). InnerVolumeSpecName "kube-api-access-dh8tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:01:03 crc kubenswrapper[4667]: I0131 04:01:03.847969 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65537ed9-39d5-40b0-82c9-a4b3d9dc6551-util" (OuterVolumeSpecName: "util") pod "65537ed9-39d5-40b0-82c9-a4b3d9dc6551" (UID: "65537ed9-39d5-40b0-82c9-a4b3d9dc6551"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:01:03 crc kubenswrapper[4667]: I0131 04:01:03.919084 4667 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65537ed9-39d5-40b0-82c9-a4b3d9dc6551-util\") on node \"crc\" DevicePath \"\"" Jan 31 04:01:03 crc kubenswrapper[4667]: I0131 04:01:03.919126 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh8tk\" (UniqueName: \"kubernetes.io/projected/65537ed9-39d5-40b0-82c9-a4b3d9dc6551-kube-api-access-dh8tk\") on node \"crc\" DevicePath \"\"" Jan 31 04:01:03 crc kubenswrapper[4667]: I0131 04:01:03.919139 4667 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65537ed9-39d5-40b0-82c9-a4b3d9dc6551-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:01:04 crc kubenswrapper[4667]: I0131 04:01:04.417526 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw" Jan 31 04:01:04 crc kubenswrapper[4667]: I0131 04:01:04.418971 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw" event={"ID":"65537ed9-39d5-40b0-82c9-a4b3d9dc6551","Type":"ContainerDied","Data":"52dbc493c1a534009d7e3a2f50f01d5058a3d072790428d3d98f2122d3cc95d1"} Jan 31 04:01:04 crc kubenswrapper[4667]: I0131 04:01:04.419031 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52dbc493c1a534009d7e3a2f50f01d5058a3d072790428d3d98f2122d3cc95d1" Jan 31 04:01:09 crc kubenswrapper[4667]: I0131 04:01:09.088790 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gxnf2" Jan 31 04:01:09 crc kubenswrapper[4667]: I0131 04:01:09.089167 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gxnf2" Jan 31 04:01:10 crc kubenswrapper[4667]: I0131 04:01:10.184012 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gxnf2" podUID="3c56e881-0420-4250-98d6-896940599521" containerName="registry-server" probeResult="failure" output=< Jan 31 04:01:10 crc kubenswrapper[4667]: timeout: failed to connect service ":50051" within 1s Jan 31 04:01:10 crc kubenswrapper[4667]: > Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.479164 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6fbb7fc476-m2zqb"] Jan 31 04:01:13 crc kubenswrapper[4667]: E0131 04:01:13.479428 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65537ed9-39d5-40b0-82c9-a4b3d9dc6551" containerName="pull" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.479441 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="65537ed9-39d5-40b0-82c9-a4b3d9dc6551" containerName="pull" Jan 31 04:01:13 crc kubenswrapper[4667]: E0131 04:01:13.479454 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f281370-6419-4dfb-b21f-9d1c9c7eddaa" containerName="console" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.479460 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f281370-6419-4dfb-b21f-9d1c9c7eddaa" containerName="console" Jan 31 04:01:13 crc kubenswrapper[4667]: E0131 04:01:13.479470 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65537ed9-39d5-40b0-82c9-a4b3d9dc6551" containerName="util" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.479477 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="65537ed9-39d5-40b0-82c9-a4b3d9dc6551" containerName="util" Jan 31 04:01:13 crc kubenswrapper[4667]: E0131 04:01:13.479492 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65537ed9-39d5-40b0-82c9-a4b3d9dc6551" containerName="extract" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.479498 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="65537ed9-39d5-40b0-82c9-a4b3d9dc6551" containerName="extract" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.479606 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="65537ed9-39d5-40b0-82c9-a4b3d9dc6551" containerName="extract" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.479622 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f281370-6419-4dfb-b21f-9d1c9c7eddaa" 
containerName="console" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.480111 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6fbb7fc476-m2zqb" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.494620 4667 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.495182 4667 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-2hltv" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.496506 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.499943 4667 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.506039 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.530573 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6fbb7fc476-m2zqb"] Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.561972 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22c596d0-b347-4dd0-ab61-7560ec9f5636-webhook-cert\") pod \"metallb-operator-controller-manager-6fbb7fc476-m2zqb\" (UID: \"22c596d0-b347-4dd0-ab61-7560ec9f5636\") " pod="metallb-system/metallb-operator-controller-manager-6fbb7fc476-m2zqb" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.562033 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22c596d0-b347-4dd0-ab61-7560ec9f5636-apiservice-cert\") pod \"metallb-operator-controller-manager-6fbb7fc476-m2zqb\" (UID: \"22c596d0-b347-4dd0-ab61-7560ec9f5636\") " pod="metallb-system/metallb-operator-controller-manager-6fbb7fc476-m2zqb" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.562078 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w2nd\" (UniqueName: \"kubernetes.io/projected/22c596d0-b347-4dd0-ab61-7560ec9f5636-kube-api-access-7w2nd\") pod \"metallb-operator-controller-manager-6fbb7fc476-m2zqb\" (UID: \"22c596d0-b347-4dd0-ab61-7560ec9f5636\") " pod="metallb-system/metallb-operator-controller-manager-6fbb7fc476-m2zqb" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.663777 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22c596d0-b347-4dd0-ab61-7560ec9f5636-webhook-cert\") pod \"metallb-operator-controller-manager-6fbb7fc476-m2zqb\" (UID: \"22c596d0-b347-4dd0-ab61-7560ec9f5636\") " pod="metallb-system/metallb-operator-controller-manager-6fbb7fc476-m2zqb" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.663853 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22c596d0-b347-4dd0-ab61-7560ec9f5636-apiservice-cert\") pod \"metallb-operator-controller-manager-6fbb7fc476-m2zqb\" (UID: \"22c596d0-b347-4dd0-ab61-7560ec9f5636\") " 
pod="metallb-system/metallb-operator-controller-manager-6fbb7fc476-m2zqb" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.663936 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w2nd\" (UniqueName: \"kubernetes.io/projected/22c596d0-b347-4dd0-ab61-7560ec9f5636-kube-api-access-7w2nd\") pod \"metallb-operator-controller-manager-6fbb7fc476-m2zqb\" (UID: \"22c596d0-b347-4dd0-ab61-7560ec9f5636\") " pod="metallb-system/metallb-operator-controller-manager-6fbb7fc476-m2zqb" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.671504 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22c596d0-b347-4dd0-ab61-7560ec9f5636-apiservice-cert\") pod \"metallb-operator-controller-manager-6fbb7fc476-m2zqb\" (UID: \"22c596d0-b347-4dd0-ab61-7560ec9f5636\") " pod="metallb-system/metallb-operator-controller-manager-6fbb7fc476-m2zqb" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.673440 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22c596d0-b347-4dd0-ab61-7560ec9f5636-webhook-cert\") pod \"metallb-operator-controller-manager-6fbb7fc476-m2zqb\" (UID: \"22c596d0-b347-4dd0-ab61-7560ec9f5636\") " pod="metallb-system/metallb-operator-controller-manager-6fbb7fc476-m2zqb" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.703730 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w2nd\" (UniqueName: \"kubernetes.io/projected/22c596d0-b347-4dd0-ab61-7560ec9f5636-kube-api-access-7w2nd\") pod \"metallb-operator-controller-manager-6fbb7fc476-m2zqb\" (UID: \"22c596d0-b347-4dd0-ab61-7560ec9f5636\") " pod="metallb-system/metallb-operator-controller-manager-6fbb7fc476-m2zqb" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.799416 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6fbb7fc476-m2zqb" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.964364 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-86d6b8c8bf-jddpp"] Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.965153 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-86d6b8c8bf-jddpp" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.984965 4667 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.985228 4667 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-htfzw" Jan 31 04:01:13 crc kubenswrapper[4667]: I0131 04:01:13.985599 4667 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 31 04:01:14 crc kubenswrapper[4667]: I0131 04:01:14.045285 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-86d6b8c8bf-jddpp"] Jan 31 04:01:14 crc kubenswrapper[4667]: I0131 04:01:14.069741 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a9fc0a54-a93e-4113-8b7c-25015ed1cb60-apiservice-cert\") pod \"metallb-operator-webhook-server-86d6b8c8bf-jddpp\" (UID: \"a9fc0a54-a93e-4113-8b7c-25015ed1cb60\") " pod="metallb-system/metallb-operator-webhook-server-86d6b8c8bf-jddpp" Jan 31 04:01:14 crc kubenswrapper[4667]: I0131 04:01:14.069817 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a9fc0a54-a93e-4113-8b7c-25015ed1cb60-webhook-cert\") pod \"metallb-operator-webhook-server-86d6b8c8bf-jddpp\" (UID: \"a9fc0a54-a93e-4113-8b7c-25015ed1cb60\") " pod="metallb-system/metallb-operator-webhook-server-86d6b8c8bf-jddpp" Jan 31 04:01:14 crc kubenswrapper[4667]: I0131 04:01:14.069886 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpf7c\" (UniqueName: \"kubernetes.io/projected/a9fc0a54-a93e-4113-8b7c-25015ed1cb60-kube-api-access-hpf7c\") pod \"metallb-operator-webhook-server-86d6b8c8bf-jddpp\" (UID: \"a9fc0a54-a93e-4113-8b7c-25015ed1cb60\") " pod="metallb-system/metallb-operator-webhook-server-86d6b8c8bf-jddpp" Jan 31 04:01:14 crc kubenswrapper[4667]: I0131 04:01:14.171371 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a9fc0a54-a93e-4113-8b7c-25015ed1cb60-webhook-cert\") pod \"metallb-operator-webhook-server-86d6b8c8bf-jddpp\" (UID: \"a9fc0a54-a93e-4113-8b7c-25015ed1cb60\") " pod="metallb-system/metallb-operator-webhook-server-86d6b8c8bf-jddpp" Jan 31 04:01:14 crc kubenswrapper[4667]: I0131 04:01:14.171438 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpf7c\" (UniqueName: \"kubernetes.io/projected/a9fc0a54-a93e-4113-8b7c-25015ed1cb60-kube-api-access-hpf7c\") pod \"metallb-operator-webhook-server-86d6b8c8bf-jddpp\" (UID: \"a9fc0a54-a93e-4113-8b7c-25015ed1cb60\") " pod="metallb-system/metallb-operator-webhook-server-86d6b8c8bf-jddpp" Jan 31 04:01:14 crc kubenswrapper[4667]: I0131 04:01:14.171483 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a9fc0a54-a93e-4113-8b7c-25015ed1cb60-apiservice-cert\") pod \"metallb-operator-webhook-server-86d6b8c8bf-jddpp\" (UID: \"a9fc0a54-a93e-4113-8b7c-25015ed1cb60\") " pod="metallb-system/metallb-operator-webhook-server-86d6b8c8bf-jddpp" Jan 31 04:01:14 crc kubenswrapper[4667]: I0131 
04:01:14.185161 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a9fc0a54-a93e-4113-8b7c-25015ed1cb60-apiservice-cert\") pod \"metallb-operator-webhook-server-86d6b8c8bf-jddpp\" (UID: \"a9fc0a54-a93e-4113-8b7c-25015ed1cb60\") " pod="metallb-system/metallb-operator-webhook-server-86d6b8c8bf-jddpp" Jan 31 04:01:14 crc kubenswrapper[4667]: I0131 04:01:14.194242 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a9fc0a54-a93e-4113-8b7c-25015ed1cb60-webhook-cert\") pod \"metallb-operator-webhook-server-86d6b8c8bf-jddpp\" (UID: \"a9fc0a54-a93e-4113-8b7c-25015ed1cb60\") " pod="metallb-system/metallb-operator-webhook-server-86d6b8c8bf-jddpp" Jan 31 04:01:14 crc kubenswrapper[4667]: I0131 04:01:14.194814 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpf7c\" (UniqueName: \"kubernetes.io/projected/a9fc0a54-a93e-4113-8b7c-25015ed1cb60-kube-api-access-hpf7c\") pod \"metallb-operator-webhook-server-86d6b8c8bf-jddpp\" (UID: \"a9fc0a54-a93e-4113-8b7c-25015ed1cb60\") " pod="metallb-system/metallb-operator-webhook-server-86d6b8c8bf-jddpp" Jan 31 04:01:14 crc kubenswrapper[4667]: I0131 04:01:14.229889 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6fbb7fc476-m2zqb"] Jan 31 04:01:14 crc kubenswrapper[4667]: I0131 04:01:14.299655 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-86d6b8c8bf-jddpp" Jan 31 04:01:14 crc kubenswrapper[4667]: I0131 04:01:14.493766 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6fbb7fc476-m2zqb" event={"ID":"22c596d0-b347-4dd0-ab61-7560ec9f5636","Type":"ContainerStarted","Data":"98353846cbc18858ba9270fd72e4c6f477aeb8c9e32d58c7b9be44fa322c7745"} Jan 31 04:01:14 crc kubenswrapper[4667]: I0131 04:01:14.769951 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-86d6b8c8bf-jddpp"] Jan 31 04:01:14 crc kubenswrapper[4667]: W0131 04:01:14.785118 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9fc0a54_a93e_4113_8b7c_25015ed1cb60.slice/crio-98357aa978818ae3a690a486ab83681e188e9de6646a8e89c6960d128a2c2a9d WatchSource:0}: Error finding container 98357aa978818ae3a690a486ab83681e188e9de6646a8e89c6960d128a2c2a9d: Status 404 returned error can't find the container with id 98357aa978818ae3a690a486ab83681e188e9de6646a8e89c6960d128a2c2a9d Jan 31 04:01:15 crc kubenswrapper[4667]: I0131 04:01:15.504664 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86d6b8c8bf-jddpp" event={"ID":"a9fc0a54-a93e-4113-8b7c-25015ed1cb60","Type":"ContainerStarted","Data":"98357aa978818ae3a690a486ab83681e188e9de6646a8e89c6960d128a2c2a9d"} Jan 31 04:01:15 crc kubenswrapper[4667]: I0131 04:01:15.704834 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:01:15 crc kubenswrapper[4667]: I0131 04:01:15.705279 4667 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:01:19 crc kubenswrapper[4667]: I0131 04:01:19.138235 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gxnf2" Jan 31 04:01:19 crc kubenswrapper[4667]: I0131 04:01:19.196095 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gxnf2" Jan 31 04:01:20 crc kubenswrapper[4667]: I0131 04:01:20.552647 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6fbb7fc476-m2zqb" event={"ID":"22c596d0-b347-4dd0-ab61-7560ec9f5636","Type":"ContainerStarted","Data":"9256263abba2b2c0c540b2552c3ae8db2c217ff2fb38622c14f7756e8760f554"} Jan 31 04:01:20 crc kubenswrapper[4667]: I0131 04:01:20.553507 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6fbb7fc476-m2zqb" Jan 31 04:01:20 crc kubenswrapper[4667]: I0131 04:01:20.576198 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6fbb7fc476-m2zqb" podStartSLOduration=1.9307066210000001 podStartE2EDuration="7.576174121s" podCreationTimestamp="2026-01-31 04:01:13 +0000 UTC" firstStartedPulling="2026-01-31 04:01:14.248777373 +0000 UTC m=+797.765112672" lastFinishedPulling="2026-01-31 04:01:19.894244873 +0000 UTC m=+803.410580172" observedRunningTime="2026-01-31 04:01:20.573045948 +0000 UTC m=+804.089381247" watchObservedRunningTime="2026-01-31 04:01:20.576174121 +0000 UTC m=+804.092509420" Jan 31 04:01:21 crc kubenswrapper[4667]: I0131 04:01:21.527144 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gxnf2"] Jan 31 04:01:21 crc kubenswrapper[4667]: I0131 04:01:21.527411 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gxnf2" podUID="3c56e881-0420-4250-98d6-896940599521" containerName="registry-server" containerID="cri-o://6a9c046ecedddb1d795d6fc537ba0c9557afefaf726ca9a855792646be76a127" gracePeriod=2 Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.448713 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gxnf2" Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.527198 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rmmq\" (UniqueName: \"kubernetes.io/projected/3c56e881-0420-4250-98d6-896940599521-kube-api-access-8rmmq\") pod \"3c56e881-0420-4250-98d6-896940599521\" (UID: \"3c56e881-0420-4250-98d6-896940599521\") " Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.527408 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c56e881-0420-4250-98d6-896940599521-catalog-content\") pod \"3c56e881-0420-4250-98d6-896940599521\" (UID: \"3c56e881-0420-4250-98d6-896940599521\") " Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.527474 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c56e881-0420-4250-98d6-896940599521-utilities\") pod \"3c56e881-0420-4250-98d6-896940599521\" (UID: \"3c56e881-0420-4250-98d6-896940599521\") " Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.528612 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c56e881-0420-4250-98d6-896940599521-utilities" (OuterVolumeSpecName: "utilities") pod "3c56e881-0420-4250-98d6-896940599521" (UID: "3c56e881-0420-4250-98d6-896940599521"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.535163 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c56e881-0420-4250-98d6-896940599521-kube-api-access-8rmmq" (OuterVolumeSpecName: "kube-api-access-8rmmq") pod "3c56e881-0420-4250-98d6-896940599521" (UID: "3c56e881-0420-4250-98d6-896940599521"). InnerVolumeSpecName "kube-api-access-8rmmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.570534 4667 generic.go:334] "Generic (PLEG): container finished" podID="3c56e881-0420-4250-98d6-896940599521" containerID="6a9c046ecedddb1d795d6fc537ba0c9557afefaf726ca9a855792646be76a127" exitCode=0 Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.570622 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gxnf2" Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.570638 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gxnf2" event={"ID":"3c56e881-0420-4250-98d6-896940599521","Type":"ContainerDied","Data":"6a9c046ecedddb1d795d6fc537ba0c9557afefaf726ca9a855792646be76a127"} Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.570684 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gxnf2" event={"ID":"3c56e881-0420-4250-98d6-896940599521","Type":"ContainerDied","Data":"87a12e3c8468ae51f3486b2122e25b911f2a0f0a71f823d2f176fcf7050ac5de"} Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.570709 4667 scope.go:117] "RemoveContainer" containerID="6a9c046ecedddb1d795d6fc537ba0c9557afefaf726ca9a855792646be76a127" Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.573747 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86d6b8c8bf-jddpp" event={"ID":"a9fc0a54-a93e-4113-8b7c-25015ed1cb60","Type":"ContainerStarted","Data":"8df7edaeff055dabb653a9256deff7794baf00d59903d2125a63e3e5f31beb48"} Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.575213 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-86d6b8c8bf-jddpp" Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.593213 4667 scope.go:117] "RemoveContainer" containerID="252dd7074b29448e0ee536407e02e7f2711eef7712ca7768edb62e5be7d70422" Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.600114 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-86d6b8c8bf-jddpp" podStartSLOduration=2.186648034 podStartE2EDuration="9.600088015s" podCreationTimestamp="2026-01-31 04:01:13 +0000 UTC" firstStartedPulling="2026-01-31 04:01:14.787246344 +0000 UTC m=+798.303581643" lastFinishedPulling="2026-01-31 04:01:22.200686325 +0000 UTC m=+805.717021624" observedRunningTime="2026-01-31 04:01:22.599286784 +0000 UTC m=+806.115622083" watchObservedRunningTime="2026-01-31 04:01:22.600088015 +0000 UTC m=+806.116423324" Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.622648 4667 scope.go:117] "RemoveContainer" containerID="f370a6bd40e1124aab6a4f8b77f811821d613b10fbba0d596eba3c8010c4935d" Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.629661 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c56e881-0420-4250-98d6-896940599521-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.629804 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rmmq\" (UniqueName: \"kubernetes.io/projected/3c56e881-0420-4250-98d6-896940599521-kube-api-access-8rmmq\") on node \"crc\" DevicePath \"\"" Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.648649 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c56e881-0420-4250-98d6-896940599521-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c56e881-0420-4250-98d6-896940599521" (UID: "3c56e881-0420-4250-98d6-896940599521"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.650553 4667 scope.go:117] "RemoveContainer" containerID="6a9c046ecedddb1d795d6fc537ba0c9557afefaf726ca9a855792646be76a127" Jan 31 04:01:22 crc kubenswrapper[4667]: E0131 04:01:22.651233 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a9c046ecedddb1d795d6fc537ba0c9557afefaf726ca9a855792646be76a127\": container with ID starting with 6a9c046ecedddb1d795d6fc537ba0c9557afefaf726ca9a855792646be76a127 not found: ID does not exist" containerID="6a9c046ecedddb1d795d6fc537ba0c9557afefaf726ca9a855792646be76a127" Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.651362 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a9c046ecedddb1d795d6fc537ba0c9557afefaf726ca9a855792646be76a127"} err="failed to get container status \"6a9c046ecedddb1d795d6fc537ba0c9557afefaf726ca9a855792646be76a127\": rpc error: code = NotFound desc = could not find container \"6a9c046ecedddb1d795d6fc537ba0c9557afefaf726ca9a855792646be76a127\": container with ID starting with 6a9c046ecedddb1d795d6fc537ba0c9557afefaf726ca9a855792646be76a127 not found: ID does not exist" Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.651496 4667 scope.go:117] "RemoveContainer" containerID="252dd7074b29448e0ee536407e02e7f2711eef7712ca7768edb62e5be7d70422" Jan 31 04:01:22 crc kubenswrapper[4667]: E0131 04:01:22.652093 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"252dd7074b29448e0ee536407e02e7f2711eef7712ca7768edb62e5be7d70422\": container with ID starting with 252dd7074b29448e0ee536407e02e7f2711eef7712ca7768edb62e5be7d70422 not found: ID does not exist" containerID="252dd7074b29448e0ee536407e02e7f2711eef7712ca7768edb62e5be7d70422" Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.652184 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"252dd7074b29448e0ee536407e02e7f2711eef7712ca7768edb62e5be7d70422"} err="failed to get container status \"252dd7074b29448e0ee536407e02e7f2711eef7712ca7768edb62e5be7d70422\": rpc error: code = NotFound desc = could not find container \"252dd7074b29448e0ee536407e02e7f2711eef7712ca7768edb62e5be7d70422\": container with ID starting with 252dd7074b29448e0ee536407e02e7f2711eef7712ca7768edb62e5be7d70422 not found: ID does not exist" Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.652357 4667 scope.go:117] "RemoveContainer" containerID="f370a6bd40e1124aab6a4f8b77f811821d613b10fbba0d596eba3c8010c4935d" Jan 31 04:01:22 crc kubenswrapper[4667]: E0131 04:01:22.652991 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f370a6bd40e1124aab6a4f8b77f811821d613b10fbba0d596eba3c8010c4935d\": container with ID starting with f370a6bd40e1124aab6a4f8b77f811821d613b10fbba0d596eba3c8010c4935d not found: ID does not exist" containerID="f370a6bd40e1124aab6a4f8b77f811821d613b10fbba0d596eba3c8010c4935d" Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.653038 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f370a6bd40e1124aab6a4f8b77f811821d613b10fbba0d596eba3c8010c4935d"} err="failed to get container status \"f370a6bd40e1124aab6a4f8b77f811821d613b10fbba0d596eba3c8010c4935d\": rpc error: code = NotFound desc = could not 
find container \"f370a6bd40e1124aab6a4f8b77f811821d613b10fbba0d596eba3c8010c4935d\": container with ID starting with f370a6bd40e1124aab6a4f8b77f811821d613b10fbba0d596eba3c8010c4935d not found: ID does not exist" Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.731319 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c56e881-0420-4250-98d6-896940599521-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.907634 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gxnf2"] Jan 31 04:01:22 crc kubenswrapper[4667]: I0131 04:01:22.920253 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gxnf2"] Jan 31 04:01:23 crc kubenswrapper[4667]: I0131 04:01:23.296524 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c56e881-0420-4250-98d6-896940599521" path="/var/lib/kubelet/pods/3c56e881-0420-4250-98d6-896940599521/volumes" Jan 31 04:01:34 crc kubenswrapper[4667]: I0131 04:01:34.309536 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-86d6b8c8bf-jddpp" Jan 31 04:01:45 crc kubenswrapper[4667]: I0131 04:01:45.704663 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:01:45 crc kubenswrapper[4667]: I0131 04:01:45.705591 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:01:53 crc kubenswrapper[4667]: I0131 04:01:53.804340 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6fbb7fc476-m2zqb" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.507192 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7tjsc"] Jan 31 04:01:54 crc kubenswrapper[4667]: E0131 04:01:54.507859 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c56e881-0420-4250-98d6-896940599521" containerName="registry-server" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.507875 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c56e881-0420-4250-98d6-896940599521" containerName="registry-server" Jan 31 04:01:54 crc kubenswrapper[4667]: E0131 04:01:54.507902 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c56e881-0420-4250-98d6-896940599521" containerName="extract-content" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.507910 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c56e881-0420-4250-98d6-896940599521" containerName="extract-content" Jan 31 04:01:54 crc kubenswrapper[4667]: E0131 04:01:54.507929 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c56e881-0420-4250-98d6-896940599521" containerName="extract-utilities" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.507937 4667 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3c56e881-0420-4250-98d6-896940599521" containerName="extract-utilities" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.508087 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c56e881-0420-4250-98d6-896940599521" containerName="registry-server" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.508605 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7tjsc" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.511403 4667 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-bmn5v" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.511568 4667 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.513971 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-h45xh"] Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.517008 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.517531 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7tjsc"] Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.518733 4667 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.519241 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.583294 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf8fd966-64cf-493a-b75c-2588e084afb8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7tjsc\" (UID: \"bf8fd966-64cf-493a-b75c-2588e084afb8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7tjsc" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.583377 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phkxj\" (UniqueName: \"kubernetes.io/projected/bf8fd966-64cf-493a-b75c-2588e084afb8-kube-api-access-phkxj\") pod \"frr-k8s-webhook-server-7df86c4f6c-7tjsc\" (UID: \"bf8fd966-64cf-493a-b75c-2588e084afb8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7tjsc" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.642001 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-tqnx9"] Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.643222 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-tqnx9" Jan 31 04:01:54 crc kubenswrapper[4667]: W0131 04:01:54.648262 4667 reflector.go:561] object-"metallb-system"/"metallb-excludel2": failed to list *v1.ConfigMap: configmaps "metallb-excludel2" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 31 04:01:54 crc kubenswrapper[4667]: E0131 04:01:54.648327 4667 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-excludel2\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"metallb-excludel2\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 04:01:54 crc kubenswrapper[4667]: W0131 04:01:54.649587 4667 reflector.go:561] object-"metallb-system"/"speaker-certs-secret": failed to list *v1.Secret: secrets "speaker-certs-secret" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 31 04:01:54 crc kubenswrapper[4667]: W0131 04:01:54.649600 4667 reflector.go:561] object-"metallb-system"/"speaker-dockercfg-h7d84": failed to list *v1.Secret: secrets "speaker-dockercfg-h7d84" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 31 04:01:54 crc kubenswrapper[4667]: E0131 04:01:54.649661 4667 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-dockercfg-h7d84\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"speaker-dockercfg-h7d84\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 04:01:54 crc kubenswrapper[4667]: E0131 04:01:54.649632 4667 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"speaker-certs-secret\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 04:01:54 crc kubenswrapper[4667]: W0131 04:01:54.649710 4667 reflector.go:561] object-"metallb-system"/"metallb-memberlist": failed to list *v1.Secret: secrets "metallb-memberlist" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 31 04:01:54 crc kubenswrapper[4667]: E0131 04:01:54.649744 4667 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-memberlist\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"metallb-memberlist\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.673008 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-np9hr"] Jan 31 
04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.674511 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-np9hr" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.676173 4667 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.684495 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1d6f4476-1b56-481c-b15e-ec4149642acc-frr-sockets\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.684567 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phkxj\" (UniqueName: \"kubernetes.io/projected/bf8fd966-64cf-493a-b75c-2588e084afb8-kube-api-access-phkxj\") pod \"frr-k8s-webhook-server-7df86c4f6c-7tjsc\" (UID: \"bf8fd966-64cf-493a-b75c-2588e084afb8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7tjsc" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.684596 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdmv8\" (UniqueName: \"kubernetes.io/projected/1d6f4476-1b56-481c-b15e-ec4149642acc-kube-api-access-hdmv8\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.684809 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d6f4476-1b56-481c-b15e-ec4149642acc-metrics-certs\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.684925 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1d6f4476-1b56-481c-b15e-ec4149642acc-frr-startup\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.684968 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1d6f4476-1b56-481c-b15e-ec4149642acc-reloader\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.685032 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1d6f4476-1b56-481c-b15e-ec4149642acc-metrics\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.685108 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf8fd966-64cf-493a-b75c-2588e084afb8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7tjsc\" (UID: \"bf8fd966-64cf-493a-b75c-2588e084afb8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7tjsc" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.685148 4667 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1d6f4476-1b56-481c-b15e-ec4149642acc-frr-conf\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.696699 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf8fd966-64cf-493a-b75c-2588e084afb8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7tjsc\" (UID: \"bf8fd966-64cf-493a-b75c-2588e084afb8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7tjsc" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.698337 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-np9hr"] Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.746911 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phkxj\" (UniqueName: \"kubernetes.io/projected/bf8fd966-64cf-493a-b75c-2588e084afb8-kube-api-access-phkxj\") pod \"frr-k8s-webhook-server-7df86c4f6c-7tjsc\" (UID: \"bf8fd966-64cf-493a-b75c-2588e084afb8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7tjsc" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.786714 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdmv8\" (UniqueName: \"kubernetes.io/projected/1d6f4476-1b56-481c-b15e-ec4149642acc-kube-api-access-hdmv8\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.786764 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a01baef7-dca0-4217-a1de-cbfcf6348664-metallb-excludel2\") pod \"speaker-tqnx9\" (UID: \"a01baef7-dca0-4217-a1de-cbfcf6348664\") " pod="metallb-system/speaker-tqnx9" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.786801 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77-cert\") pod \"controller-6968d8fdc4-np9hr\" (UID: \"62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77\") " pod="metallb-system/controller-6968d8fdc4-np9hr" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.786819 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbbxp\" (UniqueName: \"kubernetes.io/projected/62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77-kube-api-access-hbbxp\") pod \"controller-6968d8fdc4-np9hr\" (UID: \"62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77\") " pod="metallb-system/controller-6968d8fdc4-np9hr" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.786866 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d6f4476-1b56-481c-b15e-ec4149642acc-metrics-certs\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: E0131 04:01:54.786976 4667 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.786980 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77-metrics-certs\") pod \"controller-6968d8fdc4-np9hr\" (UID: \"62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77\") " pod="metallb-system/controller-6968d8fdc4-np9hr" Jan 31 04:01:54 crc kubenswrapper[4667]: E0131 04:01:54.787027 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d6f4476-1b56-481c-b15e-ec4149642acc-metrics-certs podName:1d6f4476-1b56-481c-b15e-ec4149642acc nodeName:}" failed. No retries permitted until 2026-01-31 04:01:55.287008503 +0000 UTC m=+838.803343802 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d6f4476-1b56-481c-b15e-ec4149642acc-metrics-certs") pod "frr-k8s-h45xh" (UID: "1d6f4476-1b56-481c-b15e-ec4149642acc") : secret "frr-k8s-certs-secret" not found Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.787077 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1d6f4476-1b56-481c-b15e-ec4149642acc-frr-startup\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.787105 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1d6f4476-1b56-481c-b15e-ec4149642acc-reloader\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.787165 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1d6f4476-1b56-481c-b15e-ec4149642acc-metrics\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.787186 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a01baef7-dca0-4217-a1de-cbfcf6348664-memberlist\") pod \"speaker-tqnx9\" (UID: \"a01baef7-dca0-4217-a1de-cbfcf6348664\") " pod="metallb-system/speaker-tqnx9" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.787217 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a01baef7-dca0-4217-a1de-cbfcf6348664-metrics-certs\") pod \"speaker-tqnx9\" (UID: \"a01baef7-dca0-4217-a1de-cbfcf6348664\") " pod="metallb-system/speaker-tqnx9" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.787265 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1d6f4476-1b56-481c-b15e-ec4149642acc-frr-conf\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.787291 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1d6f4476-1b56-481c-b15e-ec4149642acc-frr-sockets\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.787321 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9wbz6\" (UniqueName: \"kubernetes.io/projected/a01baef7-dca0-4217-a1de-cbfcf6348664-kube-api-access-9wbz6\") pod \"speaker-tqnx9\" (UID: \"a01baef7-dca0-4217-a1de-cbfcf6348664\") " pod="metallb-system/speaker-tqnx9" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.787594 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1d6f4476-1b56-481c-b15e-ec4149642acc-reloader\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.787678 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1d6f4476-1b56-481c-b15e-ec4149642acc-metrics\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.787830 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1d6f4476-1b56-481c-b15e-ec4149642acc-frr-conf\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.787981 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1d6f4476-1b56-481c-b15e-ec4149642acc-frr-sockets\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.788312 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1d6f4476-1b56-481c-b15e-ec4149642acc-frr-startup\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.813459 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdmv8\" (UniqueName: \"kubernetes.io/projected/1d6f4476-1b56-481c-b15e-ec4149642acc-kube-api-access-hdmv8\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.827541 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7tjsc" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.889036 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77-cert\") pod \"controller-6968d8fdc4-np9hr\" (UID: \"62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77\") " pod="metallb-system/controller-6968d8fdc4-np9hr" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.889095 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbbxp\" (UniqueName: \"kubernetes.io/projected/62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77-kube-api-access-hbbxp\") pod \"controller-6968d8fdc4-np9hr\" (UID: \"62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77\") " pod="metallb-system/controller-6968d8fdc4-np9hr" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.889134 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77-metrics-certs\") pod \"controller-6968d8fdc4-np9hr\" (UID: \"62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77\") " pod="metallb-system/controller-6968d8fdc4-np9hr" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.889173 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a01baef7-dca0-4217-a1de-cbfcf6348664-memberlist\") pod \"speaker-tqnx9\" (UID: \"a01baef7-dca0-4217-a1de-cbfcf6348664\") " pod="metallb-system/speaker-tqnx9" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.889194 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a01baef7-dca0-4217-a1de-cbfcf6348664-metrics-certs\") pod \"speaker-tqnx9\" (UID: \"a01baef7-dca0-4217-a1de-cbfcf6348664\") " pod="metallb-system/speaker-tqnx9" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.889221 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wbz6\" (UniqueName: \"kubernetes.io/projected/a01baef7-dca0-4217-a1de-cbfcf6348664-kube-api-access-9wbz6\") pod \"speaker-tqnx9\" (UID: \"a01baef7-dca0-4217-a1de-cbfcf6348664\") " pod="metallb-system/speaker-tqnx9" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.889246 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a01baef7-dca0-4217-a1de-cbfcf6348664-metallb-excludel2\") pod \"speaker-tqnx9\" (UID: \"a01baef7-dca0-4217-a1de-cbfcf6348664\") " pod="metallb-system/speaker-tqnx9" Jan 31 04:01:54 crc kubenswrapper[4667]: E0131 04:01:54.890484 4667 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 31 04:01:54 crc kubenswrapper[4667]: E0131 04:01:54.890589 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77-metrics-certs podName:62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77 nodeName:}" failed. No retries permitted until 2026-01-31 04:01:55.390570024 +0000 UTC m=+838.906905323 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77-metrics-certs") pod "controller-6968d8fdc4-np9hr" (UID: "62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77") : secret "controller-certs-secret" not found Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.894499 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77-cert\") pod \"controller-6968d8fdc4-np9hr\" (UID: \"62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77\") " pod="metallb-system/controller-6968d8fdc4-np9hr" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.906725 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wbz6\" (UniqueName: \"kubernetes.io/projected/a01baef7-dca0-4217-a1de-cbfcf6348664-kube-api-access-9wbz6\") pod \"speaker-tqnx9\" (UID: \"a01baef7-dca0-4217-a1de-cbfcf6348664\") " pod="metallb-system/speaker-tqnx9" Jan 31 04:01:54 crc kubenswrapper[4667]: I0131 04:01:54.907267 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbbxp\" (UniqueName: \"kubernetes.io/projected/62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77-kube-api-access-hbbxp\") pod \"controller-6968d8fdc4-np9hr\" (UID: \"62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77\") " pod="metallb-system/controller-6968d8fdc4-np9hr" Jan 31 04:01:55 crc kubenswrapper[4667]: I0131 04:01:55.293922 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d6f4476-1b56-481c-b15e-ec4149642acc-metrics-certs\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:55 crc kubenswrapper[4667]: I0131 04:01:55.297785 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d6f4476-1b56-481c-b15e-ec4149642acc-metrics-certs\") pod \"frr-k8s-h45xh\" (UID: \"1d6f4476-1b56-481c-b15e-ec4149642acc\") " pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:55 crc kubenswrapper[4667]: I0131 04:01:55.309885 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7tjsc"] Jan 31 04:01:55 crc kubenswrapper[4667]: I0131 04:01:55.395151 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77-metrics-certs\") pod \"controller-6968d8fdc4-np9hr\" (UID: \"62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77\") " pod="metallb-system/controller-6968d8fdc4-np9hr" Jan 31 04:01:55 crc kubenswrapper[4667]: I0131 04:01:55.398507 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77-metrics-certs\") pod \"controller-6968d8fdc4-np9hr\" (UID: \"62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77\") " pod="metallb-system/controller-6968d8fdc4-np9hr" Jan 31 04:01:55 crc kubenswrapper[4667]: I0131 04:01:55.434599 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-h45xh" Jan 31 04:01:55 crc kubenswrapper[4667]: I0131 04:01:55.588120 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-np9hr" Jan 31 04:01:55 crc kubenswrapper[4667]: I0131 04:01:55.688504 4667 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 31 04:01:55 crc kubenswrapper[4667]: I0131 04:01:55.694996 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a01baef7-dca0-4217-a1de-cbfcf6348664-metrics-certs\") pod \"speaker-tqnx9\" (UID: \"a01baef7-dca0-4217-a1de-cbfcf6348664\") " pod="metallb-system/speaker-tqnx9" Jan 31 04:01:55 crc kubenswrapper[4667]: I0131 04:01:55.725041 4667 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-h7d84" Jan 31 04:01:55 crc kubenswrapper[4667]: I0131 04:01:55.744644 4667 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 31 04:01:55 crc kubenswrapper[4667]: E0131 04:01:55.746966 4667 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 04:01:55 crc kubenswrapper[4667]: E0131 04:01:55.747062 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a01baef7-dca0-4217-a1de-cbfcf6348664-memberlist podName:a01baef7-dca0-4217-a1de-cbfcf6348664 nodeName:}" failed. No retries permitted until 2026-01-31 04:01:56.247037511 +0000 UTC m=+839.763372810 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a01baef7-dca0-4217-a1de-cbfcf6348664-memberlist") pod "speaker-tqnx9" (UID: "a01baef7-dca0-4217-a1de-cbfcf6348664") : secret "metallb-memberlist" not found Jan 31 04:01:55 crc kubenswrapper[4667]: I0131 04:01:55.782625 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7tjsc" event={"ID":"bf8fd966-64cf-493a-b75c-2588e084afb8","Type":"ContainerStarted","Data":"114e905966ae6e6af9624e9b77bd3ad7eb731382ef5f12b90e1d4a8802e9df9e"} Jan 31 04:01:55 crc kubenswrapper[4667]: I0131 04:01:55.784380 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h45xh" event={"ID":"1d6f4476-1b56-481c-b15e-ec4149642acc","Type":"ContainerStarted","Data":"d26fec56fad32d9ff18eab3df39e5822903a2755327109e647159fe5c41c8c14"} Jan 31 04:01:55 crc kubenswrapper[4667]: I0131 04:01:55.827420 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-np9hr"] Jan 31 04:01:55 crc kubenswrapper[4667]: E0131 04:01:55.890019 4667 configmap.go:193] Couldn't get configMap metallb-system/metallb-excludel2: failed to sync configmap cache: timed out waiting for the condition Jan 31 04:01:55 crc kubenswrapper[4667]: E0131 04:01:55.890129 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a01baef7-dca0-4217-a1de-cbfcf6348664-metallb-excludel2 podName:a01baef7-dca0-4217-a1de-cbfcf6348664 nodeName:}" failed. No retries permitted until 2026-01-31 04:01:56.390102497 +0000 UTC m=+839.906437796 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metallb-excludel2" (UniqueName: "kubernetes.io/configmap/a01baef7-dca0-4217-a1de-cbfcf6348664-metallb-excludel2") pod "speaker-tqnx9" (UID: "a01baef7-dca0-4217-a1de-cbfcf6348664") : failed to sync configmap cache: timed out waiting for the condition Jan 31 04:01:56 crc kubenswrapper[4667]: I0131 04:01:56.116727 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 31 04:01:56 crc kubenswrapper[4667]: I0131 04:01:56.305853 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a01baef7-dca0-4217-a1de-cbfcf6348664-memberlist\") pod \"speaker-tqnx9\" (UID: \"a01baef7-dca0-4217-a1de-cbfcf6348664\") " pod="metallb-system/speaker-tqnx9" Jan 31 04:01:56 crc kubenswrapper[4667]: I0131 04:01:56.310571 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a01baef7-dca0-4217-a1de-cbfcf6348664-memberlist\") pod \"speaker-tqnx9\" (UID: \"a01baef7-dca0-4217-a1de-cbfcf6348664\") " pod="metallb-system/speaker-tqnx9" Jan 31 04:01:56 crc kubenswrapper[4667]: I0131 04:01:56.407814 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a01baef7-dca0-4217-a1de-cbfcf6348664-metallb-excludel2\") pod \"speaker-tqnx9\" (UID: \"a01baef7-dca0-4217-a1de-cbfcf6348664\") " pod="metallb-system/speaker-tqnx9" Jan 31 04:01:56 crc kubenswrapper[4667]: I0131 04:01:56.409028 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a01baef7-dca0-4217-a1de-cbfcf6348664-metallb-excludel2\") pod \"speaker-tqnx9\" (UID: \"a01baef7-dca0-4217-a1de-cbfcf6348664\") " pod="metallb-system/speaker-tqnx9" Jan 31 04:01:56 crc kubenswrapper[4667]: I0131 04:01:56.462365 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-tqnx9" Jan 31 04:01:56 crc kubenswrapper[4667]: I0131 04:01:56.823764 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-np9hr" event={"ID":"62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77","Type":"ContainerStarted","Data":"6f8990c19fc745136e6573aeb6ad2fc5f9c0cdd6f979945eb173dfcd695b4930"} Jan 31 04:01:56 crc kubenswrapper[4667]: I0131 04:01:56.824132 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-np9hr" event={"ID":"62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77","Type":"ContainerStarted","Data":"07e5d94e4a059fcc25b0a89f2b69016f1fac1586a6c10d99eea4f314ae975270"} Jan 31 04:01:56 crc kubenswrapper[4667]: I0131 04:01:56.824147 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-np9hr" event={"ID":"62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77","Type":"ContainerStarted","Data":"1319fcc830fbc30c15a6795eabb3e8e44f31facde65e8de34464983a89e609ef"} Jan 31 04:01:56 crc kubenswrapper[4667]: I0131 04:01:56.824939 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-np9hr" Jan 31 04:01:56 crc kubenswrapper[4667]: I0131 04:01:56.830521 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tqnx9" event={"ID":"a01baef7-dca0-4217-a1de-cbfcf6348664","Type":"ContainerStarted","Data":"54ab1575eb17b73f1a259cb9682142e60f9fd0ae796115c202c908293e3540f5"} Jan 31 04:01:56 crc kubenswrapper[4667]: I0131 04:01:56.830571 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tqnx9" event={"ID":"a01baef7-dca0-4217-a1de-cbfcf6348664","Type":"ContainerStarted","Data":"2bd3d75b3d24eb7caaef6314ebcdaee3d4df937f0b85c3cd86fe7bca27a02444"} Jan 31 04:01:56 crc kubenswrapper[4667]: I0131 04:01:56.859690 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-np9hr" podStartSLOduration=2.859668108 podStartE2EDuration="2.859668108s" podCreationTimestamp="2026-01-31 04:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:01:56.853903225 +0000 UTC m=+840.370238544" watchObservedRunningTime="2026-01-31 04:01:56.859668108 +0000 UTC m=+840.376003407" Jan 31 04:01:57 crc kubenswrapper[4667]: I0131 04:01:57.847307 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tqnx9" event={"ID":"a01baef7-dca0-4217-a1de-cbfcf6348664","Type":"ContainerStarted","Data":"3b3b9c6dfffc7f136e19749d76b512142e5173bc28a32119e5168c878dd607b3"} Jan 31 04:01:57 crc kubenswrapper[4667]: I0131 04:01:57.847876 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-tqnx9" Jan 31 04:01:57 crc kubenswrapper[4667]: I0131 04:01:57.872946 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-tqnx9" podStartSLOduration=3.872926515 podStartE2EDuration="3.872926515s" podCreationTimestamp="2026-01-31 04:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:01:57.871037595 +0000 UTC m=+841.387372894" watchObservedRunningTime="2026-01-31 04:01:57.872926515 +0000 UTC m=+841.389261814" Jan 31 04:02:04 crc kubenswrapper[4667]: I0131 04:02:04.904619 4667 generic.go:334] "Generic (PLEG): container finished" 
podID="1d6f4476-1b56-481c-b15e-ec4149642acc" containerID="ed07e82dfcc601ccb9ac2353d0a3cf96e435026e988a1a35248ad66d7bd403f7" exitCode=0 Jan 31 04:02:04 crc kubenswrapper[4667]: I0131 04:02:04.904689 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h45xh" event={"ID":"1d6f4476-1b56-481c-b15e-ec4149642acc","Type":"ContainerDied","Data":"ed07e82dfcc601ccb9ac2353d0a3cf96e435026e988a1a35248ad66d7bd403f7"} Jan 31 04:02:04 crc kubenswrapper[4667]: I0131 04:02:04.908562 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7tjsc" event={"ID":"bf8fd966-64cf-493a-b75c-2588e084afb8","Type":"ContainerStarted","Data":"7eec9ee7cd72256ff892fb895652fab388f992ae853a190e61dd65bd407722c2"} Jan 31 04:02:04 crc kubenswrapper[4667]: I0131 04:02:04.908768 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7tjsc" Jan 31 04:02:04 crc kubenswrapper[4667]: I0131 04:02:04.945243 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7tjsc" podStartSLOduration=2.089318705 podStartE2EDuration="10.945224234s" podCreationTimestamp="2026-01-31 04:01:54 +0000 UTC" firstStartedPulling="2026-01-31 04:01:55.34233857 +0000 UTC m=+838.858673869" lastFinishedPulling="2026-01-31 04:02:04.198244099 +0000 UTC m=+847.714579398" observedRunningTime="2026-01-31 04:02:04.943901899 +0000 UTC m=+848.460237188" watchObservedRunningTime="2026-01-31 04:02:04.945224234 +0000 UTC m=+848.461559533" Jan 31 04:02:05 crc kubenswrapper[4667]: I0131 04:02:05.594153 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-np9hr" Jan 31 04:02:05 crc kubenswrapper[4667]: I0131 04:02:05.935426 4667 generic.go:334] "Generic (PLEG): container finished" podID="1d6f4476-1b56-481c-b15e-ec4149642acc" containerID="26a87ae954dece469d634c2c026d84889a07b9fda1f5008a6651492e7befd428" exitCode=0 Jan 31 04:02:05 crc kubenswrapper[4667]: I0131 04:02:05.935495 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h45xh" event={"ID":"1d6f4476-1b56-481c-b15e-ec4149642acc","Type":"ContainerDied","Data":"26a87ae954dece469d634c2c026d84889a07b9fda1f5008a6651492e7befd428"} Jan 31 04:02:06 crc kubenswrapper[4667]: I0131 04:02:06.466893 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-tqnx9" Jan 31 04:02:06 crc kubenswrapper[4667]: I0131 04:02:06.944323 4667 generic.go:334] "Generic (PLEG): container finished" podID="1d6f4476-1b56-481c-b15e-ec4149642acc" containerID="53566635b3b51b4952db912699d75bb84aad13c9bdd37f4ee185942a50df79fc" exitCode=0 Jan 31 04:02:06 crc kubenswrapper[4667]: I0131 04:02:06.944381 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h45xh" event={"ID":"1d6f4476-1b56-481c-b15e-ec4149642acc","Type":"ContainerDied","Data":"53566635b3b51b4952db912699d75bb84aad13c9bdd37f4ee185942a50df79fc"} Jan 31 04:02:07 crc kubenswrapper[4667]: I0131 04:02:07.970105 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h45xh" event={"ID":"1d6f4476-1b56-481c-b15e-ec4149642acc","Type":"ContainerStarted","Data":"ee48b534653250f2bea98e6dbb7992f481b1ddb78d6140a179c7c4bebb89379e"} Jan 31 04:02:07 crc kubenswrapper[4667]: I0131 04:02:07.970697 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h45xh" 
event={"ID":"1d6f4476-1b56-481c-b15e-ec4149642acc","Type":"ContainerStarted","Data":"dbb64a27d656aa635b2cd6347957baed6c79cbf1d9d9f0151be39c6e801fadc9"} Jan 31 04:02:07 crc kubenswrapper[4667]: I0131 04:02:07.970732 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h45xh" event={"ID":"1d6f4476-1b56-481c-b15e-ec4149642acc","Type":"ContainerStarted","Data":"a527d61eb0a2672389a7da295d506b94b3fc3829504706c006cc4c6f30ad8b79"} Jan 31 04:02:07 crc kubenswrapper[4667]: I0131 04:02:07.970751 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h45xh" event={"ID":"1d6f4476-1b56-481c-b15e-ec4149642acc","Type":"ContainerStarted","Data":"c1e19563297d72990525044dec26e53801570d47338be11e4e96cfa4b40f16fb"} Jan 31 04:02:07 crc kubenswrapper[4667]: I0131 04:02:07.970767 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h45xh" event={"ID":"1d6f4476-1b56-481c-b15e-ec4149642acc","Type":"ContainerStarted","Data":"302c14a1d2808ef43469873e15df8e8c69af6422375457c8e762a4bf616bd1a0"} Jan 31 04:02:08 crc kubenswrapper[4667]: I0131 04:02:08.984254 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h45xh" event={"ID":"1d6f4476-1b56-481c-b15e-ec4149642acc","Type":"ContainerStarted","Data":"f5884f6bf21cd6299cac5f88c0017a0c46f4577a8d4a8395ac3284ebcd9e1792"} Jan 31 04:02:08 crc kubenswrapper[4667]: I0131 04:02:08.984883 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-h45xh" Jan 31 04:02:09 crc kubenswrapper[4667]: I0131 04:02:09.015511 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-h45xh" podStartSLOduration=6.3941050090000005 podStartE2EDuration="15.015488741s" podCreationTimestamp="2026-01-31 04:01:54 +0000 UTC" firstStartedPulling="2026-01-31 04:01:55.559655202 +0000 UTC m=+839.075990501" lastFinishedPulling="2026-01-31 04:02:04.181038944 +0000 UTC m=+847.697374233" observedRunningTime="2026-01-31 04:02:09.015302396 +0000 UTC m=+852.531637745" watchObservedRunningTime="2026-01-31 04:02:09.015488741 +0000 UTC m=+852.531824040" Jan 31 04:02:09 crc kubenswrapper[4667]: I0131 04:02:09.290392 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bzzwf"] Jan 31 04:02:09 crc kubenswrapper[4667]: I0131 04:02:09.291498 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bzzwf" Jan 31 04:02:09 crc kubenswrapper[4667]: I0131 04:02:09.295114 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-2w9z6" Jan 31 04:02:09 crc kubenswrapper[4667]: I0131 04:02:09.358214 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 31 04:02:09 crc kubenswrapper[4667]: I0131 04:02:09.365483 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 31 04:02:09 crc kubenswrapper[4667]: I0131 04:02:09.416664 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bzzwf"] Jan 31 04:02:09 crc kubenswrapper[4667]: I0131 04:02:09.468086 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqj6n\" (UniqueName: \"kubernetes.io/projected/e69f9a6d-919b-41e5-bc35-93fa0b3fdc01-kube-api-access-hqj6n\") pod \"openstack-operator-index-bzzwf\" (UID: \"e69f9a6d-919b-41e5-bc35-93fa0b3fdc01\") " pod="openstack-operators/openstack-operator-index-bzzwf" Jan 31 04:02:09 crc kubenswrapper[4667]: I0131 04:02:09.569646 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqj6n\" (UniqueName: \"kubernetes.io/projected/e69f9a6d-919b-41e5-bc35-93fa0b3fdc01-kube-api-access-hqj6n\") pod \"openstack-operator-index-bzzwf\" (UID: \"e69f9a6d-919b-41e5-bc35-93fa0b3fdc01\") " pod="openstack-operators/openstack-operator-index-bzzwf" Jan 31 04:02:09 crc kubenswrapper[4667]: I0131 04:02:09.591183 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqj6n\" (UniqueName: \"kubernetes.io/projected/e69f9a6d-919b-41e5-bc35-93fa0b3fdc01-kube-api-access-hqj6n\") pod \"openstack-operator-index-bzzwf\" (UID: \"e69f9a6d-919b-41e5-bc35-93fa0b3fdc01\") " pod="openstack-operators/openstack-operator-index-bzzwf" Jan 31 04:02:09 crc kubenswrapper[4667]: I0131 04:02:09.714186 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bzzwf" Jan 31 04:02:09 crc kubenswrapper[4667]: I0131 04:02:09.963530 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bzzwf"] Jan 31 04:02:09 crc kubenswrapper[4667]: I0131 04:02:09.994986 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bzzwf" event={"ID":"e69f9a6d-919b-41e5-bc35-93fa0b3fdc01","Type":"ContainerStarted","Data":"90e1a4fe6c63386af974142a979d3f58ec462cf0c6ba6d461acfc303d0ac2d16"} Jan 31 04:02:10 crc kubenswrapper[4667]: I0131 04:02:10.435626 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-h45xh" Jan 31 04:02:10 crc kubenswrapper[4667]: I0131 04:02:10.496924 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-h45xh" Jan 31 04:02:12 crc kubenswrapper[4667]: I0131 04:02:12.652301 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bzzwf"] Jan 31 04:02:13 crc kubenswrapper[4667]: I0131 04:02:13.016365 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bzzwf" event={"ID":"e69f9a6d-919b-41e5-bc35-93fa0b3fdc01","Type":"ContainerStarted","Data":"10bda29d7b0c05a4984de7f20b2d1c532bb9de83a6ffb1d77d1947c7a549377d"} Jan 31 04:02:13 crc kubenswrapper[4667]: I0131 04:02:13.016516 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-bzzwf" podUID="e69f9a6d-919b-41e5-bc35-93fa0b3fdc01" containerName="registry-server" containerID="cri-o://10bda29d7b0c05a4984de7f20b2d1c532bb9de83a6ffb1d77d1947c7a549377d" gracePeriod=2 Jan 31 04:02:13 crc kubenswrapper[4667]: I0131 04:02:13.035160 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bzzwf" podStartSLOduration=1.279388697 podStartE2EDuration="4.035140029s" podCreationTimestamp="2026-01-31 04:02:09 +0000 UTC" firstStartedPulling="2026-01-31 04:02:09.977817762 +0000 UTC m=+853.494153061" lastFinishedPulling="2026-01-31 04:02:12.733569094 +0000 UTC m=+856.249904393" observedRunningTime="2026-01-31 04:02:13.032214002 +0000 UTC m=+856.548549321" watchObservedRunningTime="2026-01-31 04:02:13.035140029 +0000 UTC m=+856.551475328" Jan 31 04:02:13 crc kubenswrapper[4667]: I0131 04:02:13.266157 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-zhdt9"] Jan 31 04:02:13 crc kubenswrapper[4667]: I0131 04:02:13.266961 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zhdt9" Jan 31 04:02:13 crc kubenswrapper[4667]: I0131 04:02:13.312964 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zhdt9"] Jan 31 04:02:13 crc kubenswrapper[4667]: I0131 04:02:13.322380 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv4cv\" (UniqueName: \"kubernetes.io/projected/28215a5c-c908-41e3-b138-1b26eaab9121-kube-api-access-xv4cv\") pod \"openstack-operator-index-zhdt9\" (UID: \"28215a5c-c908-41e3-b138-1b26eaab9121\") " pod="openstack-operators/openstack-operator-index-zhdt9" Jan 31 04:02:13 crc kubenswrapper[4667]: I0131 04:02:13.404999 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bzzwf" Jan 31 04:02:13 crc kubenswrapper[4667]: I0131 04:02:13.424123 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv4cv\" (UniqueName: \"kubernetes.io/projected/28215a5c-c908-41e3-b138-1b26eaab9121-kube-api-access-xv4cv\") pod \"openstack-operator-index-zhdt9\" (UID: \"28215a5c-c908-41e3-b138-1b26eaab9121\") " pod="openstack-operators/openstack-operator-index-zhdt9" Jan 31 04:02:13 crc kubenswrapper[4667]: I0131 04:02:13.453998 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv4cv\" (UniqueName: \"kubernetes.io/projected/28215a5c-c908-41e3-b138-1b26eaab9121-kube-api-access-xv4cv\") pod \"openstack-operator-index-zhdt9\" (UID: \"28215a5c-c908-41e3-b138-1b26eaab9121\") " pod="openstack-operators/openstack-operator-index-zhdt9" Jan 31 04:02:13 crc kubenswrapper[4667]: I0131 04:02:13.525774 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqj6n\" (UniqueName: \"kubernetes.io/projected/e69f9a6d-919b-41e5-bc35-93fa0b3fdc01-kube-api-access-hqj6n\") pod \"e69f9a6d-919b-41e5-bc35-93fa0b3fdc01\" (UID: \"e69f9a6d-919b-41e5-bc35-93fa0b3fdc01\") " Jan 31 04:02:13 crc kubenswrapper[4667]: I0131 04:02:13.529230 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e69f9a6d-919b-41e5-bc35-93fa0b3fdc01-kube-api-access-hqj6n" (OuterVolumeSpecName: "kube-api-access-hqj6n") pod "e69f9a6d-919b-41e5-bc35-93fa0b3fdc01" (UID: "e69f9a6d-919b-41e5-bc35-93fa0b3fdc01"). InnerVolumeSpecName "kube-api-access-hqj6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:02:13 crc kubenswrapper[4667]: I0131 04:02:13.608919 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-zhdt9" Jan 31 04:02:13 crc kubenswrapper[4667]: I0131 04:02:13.627418 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqj6n\" (UniqueName: \"kubernetes.io/projected/e69f9a6d-919b-41e5-bc35-93fa0b3fdc01-kube-api-access-hqj6n\") on node \"crc\" DevicePath \"\"" Jan 31 04:02:13 crc kubenswrapper[4667]: I0131 04:02:13.811749 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zhdt9"] Jan 31 04:02:13 crc kubenswrapper[4667]: W0131 04:02:13.818584 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28215a5c_c908_41e3_b138_1b26eaab9121.slice/crio-e98288890c6acd0e8062298d9361a0a1eed47e96a1f516cfb19d32eb5b182fe1 WatchSource:0}: Error finding container e98288890c6acd0e8062298d9361a0a1eed47e96a1f516cfb19d32eb5b182fe1: Status 404 returned error can't find the container with id e98288890c6acd0e8062298d9361a0a1eed47e96a1f516cfb19d32eb5b182fe1 Jan 31 04:02:14 crc kubenswrapper[4667]: I0131 04:02:14.027252 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zhdt9" event={"ID":"28215a5c-c908-41e3-b138-1b26eaab9121","Type":"ContainerStarted","Data":"e98288890c6acd0e8062298d9361a0a1eed47e96a1f516cfb19d32eb5b182fe1"} Jan 31 04:02:14 crc kubenswrapper[4667]: I0131 04:02:14.028880 4667 generic.go:334] "Generic (PLEG): container finished" podID="e69f9a6d-919b-41e5-bc35-93fa0b3fdc01" containerID="10bda29d7b0c05a4984de7f20b2d1c532bb9de83a6ffb1d77d1947c7a549377d" exitCode=0 Jan 31 04:02:14 crc kubenswrapper[4667]: I0131 04:02:14.028940 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bzzwf" event={"ID":"e69f9a6d-919b-41e5-bc35-93fa0b3fdc01","Type":"ContainerDied","Data":"10bda29d7b0c05a4984de7f20b2d1c532bb9de83a6ffb1d77d1947c7a549377d"} Jan 31 04:02:14 crc kubenswrapper[4667]: I0131 04:02:14.028963 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bzzwf" Jan 31 04:02:14 crc kubenswrapper[4667]: I0131 04:02:14.028978 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bzzwf" event={"ID":"e69f9a6d-919b-41e5-bc35-93fa0b3fdc01","Type":"ContainerDied","Data":"90e1a4fe6c63386af974142a979d3f58ec462cf0c6ba6d461acfc303d0ac2d16"} Jan 31 04:02:14 crc kubenswrapper[4667]: I0131 04:02:14.029007 4667 scope.go:117] "RemoveContainer" containerID="10bda29d7b0c05a4984de7f20b2d1c532bb9de83a6ffb1d77d1947c7a549377d" Jan 31 04:02:14 crc kubenswrapper[4667]: I0131 04:02:14.054697 4667 scope.go:117] "RemoveContainer" containerID="10bda29d7b0c05a4984de7f20b2d1c532bb9de83a6ffb1d77d1947c7a549377d" Jan 31 04:02:14 crc kubenswrapper[4667]: E0131 04:02:14.055661 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10bda29d7b0c05a4984de7f20b2d1c532bb9de83a6ffb1d77d1947c7a549377d\": container with ID starting with 10bda29d7b0c05a4984de7f20b2d1c532bb9de83a6ffb1d77d1947c7a549377d not found: ID does not exist" containerID="10bda29d7b0c05a4984de7f20b2d1c532bb9de83a6ffb1d77d1947c7a549377d" Jan 31 04:02:14 crc kubenswrapper[4667]: I0131 04:02:14.055737 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10bda29d7b0c05a4984de7f20b2d1c532bb9de83a6ffb1d77d1947c7a549377d"} err="failed to get container status \"10bda29d7b0c05a4984de7f20b2d1c532bb9de83a6ffb1d77d1947c7a549377d\": rpc error: code = NotFound desc = could not find container \"10bda29d7b0c05a4984de7f20b2d1c532bb9de83a6ffb1d77d1947c7a549377d\": container with ID starting with 10bda29d7b0c05a4984de7f20b2d1c532bb9de83a6ffb1d77d1947c7a549377d not found: ID does not exist" Jan 31 04:02:14 crc kubenswrapper[4667]: I0131 04:02:14.087569 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bzzwf"] Jan 31 04:02:14 crc kubenswrapper[4667]: I0131 04:02:14.090956 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-bzzwf"] Jan 31 04:02:14 crc kubenswrapper[4667]: I0131 04:02:14.833306 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7tjsc" Jan 31 04:02:15 crc kubenswrapper[4667]: I0131 04:02:15.040061 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zhdt9" event={"ID":"28215a5c-c908-41e3-b138-1b26eaab9121","Type":"ContainerStarted","Data":"2b892a6c5d75ef2f835280f0b4e56d4a67fe778c7b88b7772b61784be66ff052"} Jan 31 04:02:15 crc kubenswrapper[4667]: I0131 04:02:15.060365 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zhdt9" podStartSLOduration=1.913620421 podStartE2EDuration="2.060337931s" podCreationTimestamp="2026-01-31 04:02:13 +0000 UTC" firstStartedPulling="2026-01-31 04:02:13.823464139 +0000 UTC m=+857.339799448" lastFinishedPulling="2026-01-31 04:02:13.970181649 +0000 UTC m=+857.486516958" observedRunningTime="2026-01-31 04:02:15.05803248 +0000 UTC m=+858.574367779" watchObservedRunningTime="2026-01-31 04:02:15.060337931 +0000 UTC m=+858.576673230" Jan 31 04:02:15 crc kubenswrapper[4667]: I0131 04:02:15.292879 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e69f9a6d-919b-41e5-bc35-93fa0b3fdc01" 
path="/var/lib/kubelet/pods/e69f9a6d-919b-41e5-bc35-93fa0b3fdc01/volumes" Jan 31 04:02:15 crc kubenswrapper[4667]: I0131 04:02:15.704492 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:02:15 crc kubenswrapper[4667]: I0131 04:02:15.705086 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:02:15 crc kubenswrapper[4667]: I0131 04:02:15.705153 4667 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 04:02:15 crc kubenswrapper[4667]: I0131 04:02:15.706238 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1e53b0068c5af26480719e1ae76b8eb2cdae9fcbfa4d0840e77aebecf0501325"} pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:02:15 crc kubenswrapper[4667]: I0131 04:02:15.706319 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" containerID="cri-o://1e53b0068c5af26480719e1ae76b8eb2cdae9fcbfa4d0840e77aebecf0501325" gracePeriod=600 Jan 31 04:02:16 crc kubenswrapper[4667]: I0131 04:02:16.052122 4667 generic.go:334] "Generic (PLEG): container finished" podID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerID="1e53b0068c5af26480719e1ae76b8eb2cdae9fcbfa4d0840e77aebecf0501325" exitCode=0 Jan 31 04:02:16 crc kubenswrapper[4667]: I0131 04:02:16.052224 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerDied","Data":"1e53b0068c5af26480719e1ae76b8eb2cdae9fcbfa4d0840e77aebecf0501325"} Jan 31 04:02:16 crc kubenswrapper[4667]: I0131 04:02:16.052333 4667 scope.go:117] "RemoveContainer" containerID="51d7a751b57a412d9d741ee969c521abf7aeca931e7ee615449f180a3fa0af59" Jan 31 04:02:17 crc kubenswrapper[4667]: I0131 04:02:17.063781 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerStarted","Data":"a2768bea3b08958c54e155e8f29b14218602ccc55cf630ccf4d7736c3b3b12ec"} Jan 31 04:02:23 crc kubenswrapper[4667]: I0131 04:02:23.610184 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-zhdt9" Jan 31 04:02:23 crc kubenswrapper[4667]: I0131 04:02:23.611919 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-zhdt9" Jan 31 04:02:23 crc kubenswrapper[4667]: I0131 04:02:23.639463 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-zhdt9" Jan 31 
04:02:24 crc kubenswrapper[4667]: I0131 04:02:24.151125 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-zhdt9" Jan 31 04:02:25 crc kubenswrapper[4667]: I0131 04:02:25.440578 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-h45xh" Jan 31 04:02:25 crc kubenswrapper[4667]: I0131 04:02:25.707296 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr"] Jan 31 04:02:25 crc kubenswrapper[4667]: E0131 04:02:25.707715 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e69f9a6d-919b-41e5-bc35-93fa0b3fdc01" containerName="registry-server" Jan 31 04:02:25 crc kubenswrapper[4667]: I0131 04:02:25.707737 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="e69f9a6d-919b-41e5-bc35-93fa0b3fdc01" containerName="registry-server" Jan 31 04:02:25 crc kubenswrapper[4667]: I0131 04:02:25.707999 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="e69f9a6d-919b-41e5-bc35-93fa0b3fdc01" containerName="registry-server" Jan 31 04:02:25 crc kubenswrapper[4667]: I0131 04:02:25.709339 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr" Jan 31 04:02:25 crc kubenswrapper[4667]: I0131 04:02:25.716574 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-mghwq" Jan 31 04:02:25 crc kubenswrapper[4667]: I0131 04:02:25.742519 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr"] Jan 31 04:02:25 crc kubenswrapper[4667]: I0131 04:02:25.827555 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/770c15b8-5980-4cf9-91c7-11b2ded11b60-util\") pod \"caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr\" (UID: \"770c15b8-5980-4cf9-91c7-11b2ded11b60\") " pod="openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr" Jan 31 04:02:25 crc kubenswrapper[4667]: I0131 04:02:25.827625 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc86f\" (UniqueName: \"kubernetes.io/projected/770c15b8-5980-4cf9-91c7-11b2ded11b60-kube-api-access-vc86f\") pod \"caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr\" (UID: \"770c15b8-5980-4cf9-91c7-11b2ded11b60\") " pod="openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr" Jan 31 04:02:25 crc kubenswrapper[4667]: I0131 04:02:25.827664 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/770c15b8-5980-4cf9-91c7-11b2ded11b60-bundle\") pod \"caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr\" (UID: \"770c15b8-5980-4cf9-91c7-11b2ded11b60\") " pod="openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr" Jan 31 04:02:25 crc kubenswrapper[4667]: I0131 04:02:25.929214 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/770c15b8-5980-4cf9-91c7-11b2ded11b60-util\") pod \"caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr\" (UID: 
\"770c15b8-5980-4cf9-91c7-11b2ded11b60\") " pod="openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr" Jan 31 04:02:25 crc kubenswrapper[4667]: I0131 04:02:25.929320 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc86f\" (UniqueName: \"kubernetes.io/projected/770c15b8-5980-4cf9-91c7-11b2ded11b60-kube-api-access-vc86f\") pod \"caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr\" (UID: \"770c15b8-5980-4cf9-91c7-11b2ded11b60\") " pod="openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr" Jan 31 04:02:25 crc kubenswrapper[4667]: I0131 04:02:25.929380 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/770c15b8-5980-4cf9-91c7-11b2ded11b60-bundle\") pod \"caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr\" (UID: \"770c15b8-5980-4cf9-91c7-11b2ded11b60\") " pod="openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr" Jan 31 04:02:25 crc kubenswrapper[4667]: I0131 04:02:25.930195 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/770c15b8-5980-4cf9-91c7-11b2ded11b60-bundle\") pod \"caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr\" (UID: \"770c15b8-5980-4cf9-91c7-11b2ded11b60\") " pod="openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr" Jan 31 04:02:25 crc kubenswrapper[4667]: I0131 04:02:25.930202 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/770c15b8-5980-4cf9-91c7-11b2ded11b60-util\") pod \"caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr\" (UID: \"770c15b8-5980-4cf9-91c7-11b2ded11b60\") " pod="openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr" Jan 31 04:02:25 crc kubenswrapper[4667]: I0131 04:02:25.963492 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc86f\" (UniqueName: \"kubernetes.io/projected/770c15b8-5980-4cf9-91c7-11b2ded11b60-kube-api-access-vc86f\") pod \"caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr\" (UID: \"770c15b8-5980-4cf9-91c7-11b2ded11b60\") " pod="openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr" Jan 31 04:02:26 crc kubenswrapper[4667]: I0131 04:02:26.028434 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr" Jan 31 04:02:26 crc kubenswrapper[4667]: I0131 04:02:26.445798 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr"] Jan 31 04:02:26 crc kubenswrapper[4667]: W0131 04:02:26.455292 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod770c15b8_5980_4cf9_91c7_11b2ded11b60.slice/crio-d1715215bb68e9db72ddee8eaf03b9f647e16fd68641830a97dca35945644668 WatchSource:0}: Error finding container d1715215bb68e9db72ddee8eaf03b9f647e16fd68641830a97dca35945644668: Status 404 returned error can't find the container with id d1715215bb68e9db72ddee8eaf03b9f647e16fd68641830a97dca35945644668 Jan 31 04:02:27 crc kubenswrapper[4667]: I0131 04:02:27.140531 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr" event={"ID":"770c15b8-5980-4cf9-91c7-11b2ded11b60","Type":"ContainerStarted","Data":"d1715215bb68e9db72ddee8eaf03b9f647e16fd68641830a97dca35945644668"} Jan 31 04:02:28 crc kubenswrapper[4667]: I0131 04:02:28.148864 4667 generic.go:334] "Generic (PLEG): container finished" podID="770c15b8-5980-4cf9-91c7-11b2ded11b60" containerID="5f687a1f9183678ebe5e4217ebdd1d8da6babc80854ae0b3b714c110e31efbe8" exitCode=0 Jan 31 04:02:28 crc kubenswrapper[4667]: I0131 04:02:28.148950 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr" event={"ID":"770c15b8-5980-4cf9-91c7-11b2ded11b60","Type":"ContainerDied","Data":"5f687a1f9183678ebe5e4217ebdd1d8da6babc80854ae0b3b714c110e31efbe8"} Jan 31 04:02:29 crc kubenswrapper[4667]: I0131 04:02:29.159693 4667 generic.go:334] "Generic (PLEG): container finished" podID="770c15b8-5980-4cf9-91c7-11b2ded11b60" containerID="0c24872a0853f295353700dd4a55bd32e4e4e30bbbbcf68aa7296143f21222c9" exitCode=0 Jan 31 04:02:29 crc kubenswrapper[4667]: I0131 04:02:29.159961 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr" event={"ID":"770c15b8-5980-4cf9-91c7-11b2ded11b60","Type":"ContainerDied","Data":"0c24872a0853f295353700dd4a55bd32e4e4e30bbbbcf68aa7296143f21222c9"} Jan 31 04:02:30 crc kubenswrapper[4667]: I0131 04:02:30.170126 4667 generic.go:334] "Generic (PLEG): container finished" podID="770c15b8-5980-4cf9-91c7-11b2ded11b60" containerID="1e6a2638e6cd22bdd9d8b3bb874c6cb74fd652edf028330f95ed595c11065047" exitCode=0 Jan 31 04:02:30 crc kubenswrapper[4667]: I0131 04:02:30.170773 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr" event={"ID":"770c15b8-5980-4cf9-91c7-11b2ded11b60","Type":"ContainerDied","Data":"1e6a2638e6cd22bdd9d8b3bb874c6cb74fd652edf028330f95ed595c11065047"} Jan 31 04:02:31 crc kubenswrapper[4667]: I0131 04:02:31.503703 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr" Jan 31 04:02:31 crc kubenswrapper[4667]: I0131 04:02:31.631062 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/770c15b8-5980-4cf9-91c7-11b2ded11b60-util\") pod \"770c15b8-5980-4cf9-91c7-11b2ded11b60\" (UID: \"770c15b8-5980-4cf9-91c7-11b2ded11b60\") " Jan 31 04:02:31 crc kubenswrapper[4667]: I0131 04:02:31.631168 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/770c15b8-5980-4cf9-91c7-11b2ded11b60-bundle\") pod \"770c15b8-5980-4cf9-91c7-11b2ded11b60\" (UID: \"770c15b8-5980-4cf9-91c7-11b2ded11b60\") " Jan 31 04:02:31 crc kubenswrapper[4667]: I0131 04:02:31.631382 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc86f\" (UniqueName: \"kubernetes.io/projected/770c15b8-5980-4cf9-91c7-11b2ded11b60-kube-api-access-vc86f\") pod \"770c15b8-5980-4cf9-91c7-11b2ded11b60\" (UID: \"770c15b8-5980-4cf9-91c7-11b2ded11b60\") " Jan 31 04:02:31 crc kubenswrapper[4667]: I0131 04:02:31.632228 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/770c15b8-5980-4cf9-91c7-11b2ded11b60-bundle" (OuterVolumeSpecName: "bundle") pod "770c15b8-5980-4cf9-91c7-11b2ded11b60" (UID: "770c15b8-5980-4cf9-91c7-11b2ded11b60"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:02:31 crc kubenswrapper[4667]: I0131 04:02:31.641122 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/770c15b8-5980-4cf9-91c7-11b2ded11b60-kube-api-access-vc86f" (OuterVolumeSpecName: "kube-api-access-vc86f") pod "770c15b8-5980-4cf9-91c7-11b2ded11b60" (UID: "770c15b8-5980-4cf9-91c7-11b2ded11b60"). InnerVolumeSpecName "kube-api-access-vc86f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:02:31 crc kubenswrapper[4667]: I0131 04:02:31.648907 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/770c15b8-5980-4cf9-91c7-11b2ded11b60-util" (OuterVolumeSpecName: "util") pod "770c15b8-5980-4cf9-91c7-11b2ded11b60" (UID: "770c15b8-5980-4cf9-91c7-11b2ded11b60"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:02:31 crc kubenswrapper[4667]: I0131 04:02:31.733580 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc86f\" (UniqueName: \"kubernetes.io/projected/770c15b8-5980-4cf9-91c7-11b2ded11b60-kube-api-access-vc86f\") on node \"crc\" DevicePath \"\"" Jan 31 04:02:31 crc kubenswrapper[4667]: I0131 04:02:31.733692 4667 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/770c15b8-5980-4cf9-91c7-11b2ded11b60-util\") on node \"crc\" DevicePath \"\"" Jan 31 04:02:31 crc kubenswrapper[4667]: I0131 04:02:31.733720 4667 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/770c15b8-5980-4cf9-91c7-11b2ded11b60-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:02:32 crc kubenswrapper[4667]: I0131 04:02:32.192226 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr" event={"ID":"770c15b8-5980-4cf9-91c7-11b2ded11b60","Type":"ContainerDied","Data":"d1715215bb68e9db72ddee8eaf03b9f647e16fd68641830a97dca35945644668"} Jan 31 04:02:32 crc kubenswrapper[4667]: I0131 04:02:32.192286 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1715215bb68e9db72ddee8eaf03b9f647e16fd68641830a97dca35945644668" Jan 31 04:02:32 crc kubenswrapper[4667]: I0131 04:02:32.192348 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr" Jan 31 04:02:38 crc kubenswrapper[4667]: I0131 04:02:38.347164 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-77f687fc99-5bq4z"] Jan 31 04:02:38 crc kubenswrapper[4667]: E0131 04:02:38.347905 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770c15b8-5980-4cf9-91c7-11b2ded11b60" containerName="util" Jan 31 04:02:38 crc kubenswrapper[4667]: I0131 04:02:38.347917 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="770c15b8-5980-4cf9-91c7-11b2ded11b60" containerName="util" Jan 31 04:02:38 crc kubenswrapper[4667]: E0131 04:02:38.347931 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770c15b8-5980-4cf9-91c7-11b2ded11b60" containerName="pull" Jan 31 04:02:38 crc kubenswrapper[4667]: I0131 04:02:38.347938 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="770c15b8-5980-4cf9-91c7-11b2ded11b60" containerName="pull" Jan 31 04:02:38 crc kubenswrapper[4667]: E0131 04:02:38.347949 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770c15b8-5980-4cf9-91c7-11b2ded11b60" containerName="extract" Jan 31 04:02:38 crc kubenswrapper[4667]: I0131 04:02:38.347954 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="770c15b8-5980-4cf9-91c7-11b2ded11b60" containerName="extract" Jan 31 04:02:38 crc kubenswrapper[4667]: I0131 04:02:38.348063 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="770c15b8-5980-4cf9-91c7-11b2ded11b60" containerName="extract" Jan 31 04:02:38 crc kubenswrapper[4667]: I0131 04:02:38.348494 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-77f687fc99-5bq4z" Jan 31 04:02:38 crc kubenswrapper[4667]: I0131 04:02:38.351010 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-hkkfj" Jan 31 04:02:38 crc kubenswrapper[4667]: I0131 04:02:38.384493 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-77f687fc99-5bq4z"] Jan 31 04:02:38 crc kubenswrapper[4667]: I0131 04:02:38.456762 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rc54\" (UniqueName: \"kubernetes.io/projected/2b9c9fa2-4838-4c78-bcab-9bc723279049-kube-api-access-7rc54\") pod \"openstack-operator-controller-init-77f687fc99-5bq4z\" (UID: \"2b9c9fa2-4838-4c78-bcab-9bc723279049\") " pod="openstack-operators/openstack-operator-controller-init-77f687fc99-5bq4z" Jan 31 04:02:38 crc kubenswrapper[4667]: I0131 04:02:38.557981 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rc54\" (UniqueName: \"kubernetes.io/projected/2b9c9fa2-4838-4c78-bcab-9bc723279049-kube-api-access-7rc54\") pod \"openstack-operator-controller-init-77f687fc99-5bq4z\" (UID: \"2b9c9fa2-4838-4c78-bcab-9bc723279049\") " pod="openstack-operators/openstack-operator-controller-init-77f687fc99-5bq4z" Jan 31 04:02:38 crc kubenswrapper[4667]: I0131 04:02:38.602066 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rc54\" (UniqueName: \"kubernetes.io/projected/2b9c9fa2-4838-4c78-bcab-9bc723279049-kube-api-access-7rc54\") pod \"openstack-operator-controller-init-77f687fc99-5bq4z\" (UID: \"2b9c9fa2-4838-4c78-bcab-9bc723279049\") " pod="openstack-operators/openstack-operator-controller-init-77f687fc99-5bq4z" Jan 31 04:02:38 crc kubenswrapper[4667]: I0131 04:02:38.669297 4667 util.go:30] "No sandbox for pod can be found. 
Jan 31 04:02:39 crc kubenswrapper[4667]: I0131 04:02:39.148234 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-77f687fc99-5bq4z"]
Jan 31 04:02:39 crc kubenswrapper[4667]: I0131 04:02:39.255280 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-77f687fc99-5bq4z" event={"ID":"2b9c9fa2-4838-4c78-bcab-9bc723279049","Type":"ContainerStarted","Data":"139fda49a28aea5ca24a11707d083c4ee028e7c164a074f41d8867610ba695dc"}
Jan 31 04:02:44 crc kubenswrapper[4667]: I0131 04:02:44.293057 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-77f687fc99-5bq4z" event={"ID":"2b9c9fa2-4838-4c78-bcab-9bc723279049","Type":"ContainerStarted","Data":"e4876e2171f3d397b885e4dc5428f96b4f88f8f6dcb356def70dc2bac6341ca1"}
Jan 31 04:02:44 crc kubenswrapper[4667]: I0131 04:02:44.294062 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-77f687fc99-5bq4z"
Jan 31 04:02:44 crc kubenswrapper[4667]: I0131 04:02:44.338701 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-77f687fc99-5bq4z" podStartSLOduration=1.6913671300000002 podStartE2EDuration="6.338675109s" podCreationTimestamp="2026-01-31 04:02:38 +0000 UTC" firstStartedPulling="2026-01-31 04:02:39.165667117 +0000 UTC m=+882.682002416" lastFinishedPulling="2026-01-31 04:02:43.812975096 +0000 UTC m=+887.329310395" observedRunningTime="2026-01-31 04:02:44.329154287 +0000 UTC m=+887.845489586" watchObservedRunningTime="2026-01-31 04:02:44.338675109 +0000 UTC m=+887.855010408"
Jan 31 04:02:48 crc kubenswrapper[4667]: I0131 04:02:48.671703 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-77f687fc99-5bq4z"
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.172496 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-2xtdf"]
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.193633 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-pqxkg"]
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.193879 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2xtdf"
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.195327 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-54fc54694b-t88kx"]
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.195888 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-pqxkg"]
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.195914 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-2xtdf"]
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.196072 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-pqxkg"
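
The pod_startup_latency_tracker entry is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (04:02:44.338675109 - 04:02:38 = 6.338675109s), and podStartSLOduration subtracts the image-pull window (lastFinishedPulling - firstStartedPulling = 4.647307979s), leaving 1.691367130s, which matches the logged 1.6913671300000002 up to float rounding. A quick recomputation with the timestamps truncated to microseconds:

from datetime import datetime

# Recompute the startup-duration figures from the timestamps in the entry above.
FMT = "%Y-%m-%d %H:%M:%S.%f"
created    = datetime.strptime("2026-01-31 04:02:38.000000", FMT)
pull_start = datetime.strptime("2026-01-31 04:02:39.165667", FMT)
pull_end   = datetime.strptime("2026-01-31 04:02:43.812975", FMT)
running    = datetime.strptime("2026-01-31 04:02:44.338675", FMT)  # watchObservedRunningTime

e2e = (running - created).total_seconds()            # ~6.338675s -> podStartE2EDuration
slo = e2e - (pull_end - pull_start).total_seconds()  # ~1.691367s -> podStartSLOduration
print(f"e2e={e2e:.6f}s slo={slo:.6f}s")
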
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.196388 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-54fc54694b-t88kx"
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.199220 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-j629c"]
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.217067 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-kln2n"
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.217337 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-sdjlb"
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.217545 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-snvdv"
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.219886 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-j629c"
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.221096 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr5sz\" (UniqueName: \"kubernetes.io/projected/75fa830b-0948-4104-874f-332cb2ea9de2-kube-api-access-vr5sz\") pod \"barbican-operator-controller-manager-54fc54694b-t88kx\" (UID: \"75fa830b-0948-4104-874f-332cb2ea9de2\") " pod="openstack-operators/barbican-operator-controller-manager-54fc54694b-t88kx"
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.224488 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-v5874"
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.244911 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-j629c"]
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.285228 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-54fc54694b-t88kx"]
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.289098 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-lzb8l"]
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.290393 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzb8l"
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.292518 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-wxcv5"
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.301344 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-lzb8l"]
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.316939 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-rfpnc"]
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.318306 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-rfpnc" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.320932 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-fzr6t" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.324980 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmmx6\" (UniqueName: \"kubernetes.io/projected/5280851f-6404-45ad-adc7-f41479cb7dc3-kube-api-access-mmmx6\") pod \"glance-operator-controller-manager-8886f4c47-j629c\" (UID: \"5280851f-6404-45ad-adc7-f41479cb7dc3\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-j629c" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.325170 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4gjl\" (UniqueName: \"kubernetes.io/projected/f26454ff-c920-4240-84dd-684272f0c0c8-kube-api-access-r4gjl\") pod \"horizon-operator-controller-manager-5fb775575f-rfpnc\" (UID: \"f26454ff-c920-4240-84dd-684272f0c0c8\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-rfpnc" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.325214 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr5sz\" (UniqueName: \"kubernetes.io/projected/75fa830b-0948-4104-874f-332cb2ea9de2-kube-api-access-vr5sz\") pod \"barbican-operator-controller-manager-54fc54694b-t88kx\" (UID: \"75fa830b-0948-4104-874f-332cb2ea9de2\") " pod="openstack-operators/barbican-operator-controller-manager-54fc54694b-t88kx" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.325283 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xrt5\" (UniqueName: \"kubernetes.io/projected/508d212d-ccda-471c-94aa-96955a519e5a-kube-api-access-7xrt5\") pod \"cinder-operator-controller-manager-8d874c8fc-pqxkg\" (UID: \"508d212d-ccda-471c-94aa-96955a519e5a\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-pqxkg" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.325310 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zrgp\" (UniqueName: \"kubernetes.io/projected/5743730b-079b-4b07-a87b-932cd637e387-kube-api-access-2zrgp\") pod \"designate-operator-controller-manager-6d9697b7f4-2xtdf\" (UID: \"5743730b-079b-4b07-a87b-932cd637e387\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2xtdf" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.325336 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bvsg\" (UniqueName: \"kubernetes.io/projected/cfe9238d-7457-43f4-9933-cece048fc3fe-kube-api-access-8bvsg\") pod \"heat-operator-controller-manager-69d6db494d-lzb8l\" (UID: \"cfe9238d-7457-43f4-9933-cece048fc3fe\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzb8l" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.369564 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr5sz\" (UniqueName: \"kubernetes.io/projected/75fa830b-0948-4104-874f-332cb2ea9de2-kube-api-access-vr5sz\") pod \"barbican-operator-controller-manager-54fc54694b-t88kx\" (UID: \"75fa830b-0948-4104-874f-332cb2ea9de2\") " 
pod="openstack-operators/barbican-operator-controller-manager-54fc54694b-t88kx" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.379920 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-zswlt"] Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.380872 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zswlt" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.384163 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4hg2j" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.384334 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.395975 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-rfpnc"] Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.421332 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-zswlt"] Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.430667 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmmx6\" (UniqueName: \"kubernetes.io/projected/5280851f-6404-45ad-adc7-f41479cb7dc3-kube-api-access-mmmx6\") pod \"glance-operator-controller-manager-8886f4c47-j629c\" (UID: \"5280851f-6404-45ad-adc7-f41479cb7dc3\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-j629c" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.430730 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4gjl\" (UniqueName: \"kubernetes.io/projected/f26454ff-c920-4240-84dd-684272f0c0c8-kube-api-access-r4gjl\") pod \"horizon-operator-controller-manager-5fb775575f-rfpnc\" (UID: \"f26454ff-c920-4240-84dd-684272f0c0c8\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-rfpnc" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.430805 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xrt5\" (UniqueName: \"kubernetes.io/projected/508d212d-ccda-471c-94aa-96955a519e5a-kube-api-access-7xrt5\") pod \"cinder-operator-controller-manager-8d874c8fc-pqxkg\" (UID: \"508d212d-ccda-471c-94aa-96955a519e5a\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-pqxkg" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.430828 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zrgp\" (UniqueName: \"kubernetes.io/projected/5743730b-079b-4b07-a87b-932cd637e387-kube-api-access-2zrgp\") pod \"designate-operator-controller-manager-6d9697b7f4-2xtdf\" (UID: \"5743730b-079b-4b07-a87b-932cd637e387\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2xtdf" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.430880 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bvsg\" (UniqueName: \"kubernetes.io/projected/cfe9238d-7457-43f4-9933-cece048fc3fe-kube-api-access-8bvsg\") pod \"heat-operator-controller-manager-69d6db494d-lzb8l\" (UID: \"cfe9238d-7457-43f4-9933-cece048fc3fe\") " 
pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzb8l" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.430979 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-6cf9z"] Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.431923 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-6cf9z" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.440928 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-6cf9z"] Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.447017 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-hxstt"] Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.448074 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-hxstt" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.449496 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-hxstt"] Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.460994 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-w6vd6"] Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.461986 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w6vd6" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.479913 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-w6vd6"] Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.484513 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-9qlq5" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.485325 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-wmmkk"] Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.486297 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wmmkk" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.533388 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert\") pod \"infra-operator-controller-manager-79955696d6-zswlt\" (UID: \"47cf710a-e856-4094-8ef8-ff115631a236\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zswlt" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.533433 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb7sn\" (UniqueName: \"kubernetes.io/projected/5108f978-fa68-4add-9f97-5e02aec8c688-kube-api-access-zb7sn\") pod \"ironic-operator-controller-manager-5f4b8bd54d-6cf9z\" (UID: \"5108f978-fa68-4add-9f97-5e02aec8c688\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-6cf9z" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.533505 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgfrm\" (UniqueName: \"kubernetes.io/projected/f955fd59-24f1-42bb-81a8-c17e32274291-kube-api-access-qgfrm\") pod \"mariadb-operator-controller-manager-67bf948998-wmmkk\" (UID: \"f955fd59-24f1-42bb-81a8-c17e32274291\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wmmkk" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.533536 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctph8\" (UniqueName: \"kubernetes.io/projected/47cf710a-e856-4094-8ef8-ff115631a236-kube-api-access-ctph8\") pod \"infra-operator-controller-manager-79955696d6-zswlt\" (UID: \"47cf710a-e856-4094-8ef8-ff115631a236\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zswlt" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.533594 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x6v5\" (UniqueName: \"kubernetes.io/projected/8a4eab04-25a1-4da9-8ee1-0243d4b69073-kube-api-access-7x6v5\") pod \"keystone-operator-controller-manager-84f48565d4-hxstt\" (UID: \"8a4eab04-25a1-4da9-8ee1-0243d4b69073\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-hxstt" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.533739 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgs2q\" (UniqueName: \"kubernetes.io/projected/ed4fdc84-4fc5-4e5a-8959-b5ea977c9b56-kube-api-access-hgs2q\") pod \"manila-operator-controller-manager-7dd968899f-w6vd6\" (UID: \"ed4fdc84-4fc5-4e5a-8959-b5ea977c9b56\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w6vd6" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.536175 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-xfzb5" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.536412 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-6gsmc" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.536631 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-jsv9l" 
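
From 04:03:08 onward the log fans out: with openstack-operator-controller-init ready, roughly twenty operator controller-manager pods (barbican, cinder, designate, glance, heat, horizon, infra, ironic, keystone, manila, mariadb, and more below) are added within about a second, and each repeats the admission pattern above: SyncLoop ADD, a new sandbox, a "Caches populated for *v1.Secret" line as the kubelet starts watching that pod's dockercfg pull secret, then the token-volume attach/mount pipeline. A one-function sketch (illustrative name) for tallying the burst from "SyncLoop ADD" entries:

import re

# Pull pod names out of "SyncLoop ADD" entries to see what the burst created.
ADD = re.compile(r'"SyncLoop ADD" source="api" pods=\["openstack-operators/([^"]+)"\]')

def pods_added(journal_lines):
    return [m.group(1) for m in map(ADD.search, journal_lines) if m]
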
Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.548799 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zrgp\" (UniqueName: \"kubernetes.io/projected/5743730b-079b-4b07-a87b-932cd637e387-kube-api-access-2zrgp\") pod \"designate-operator-controller-manager-6d9697b7f4-2xtdf\" (UID: \"5743730b-079b-4b07-a87b-932cd637e387\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2xtdf" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.553906 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-wmmkk"] Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.553968 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmmx6\" (UniqueName: \"kubernetes.io/projected/5280851f-6404-45ad-adc7-f41479cb7dc3-kube-api-access-mmmx6\") pod \"glance-operator-controller-manager-8886f4c47-j629c\" (UID: \"5280851f-6404-45ad-adc7-f41479cb7dc3\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-j629c" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.555883 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4gjl\" (UniqueName: \"kubernetes.io/projected/f26454ff-c920-4240-84dd-684272f0c0c8-kube-api-access-r4gjl\") pod \"horizon-operator-controller-manager-5fb775575f-rfpnc\" (UID: \"f26454ff-c920-4240-84dd-684272f0c0c8\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-rfpnc" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.556268 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2xtdf" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.566467 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bvsg\" (UniqueName: \"kubernetes.io/projected/cfe9238d-7457-43f4-9933-cece048fc3fe-kube-api-access-8bvsg\") pod \"heat-operator-controller-manager-69d6db494d-lzb8l\" (UID: \"cfe9238d-7457-43f4-9933-cece048fc3fe\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzb8l" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.571256 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-kf59p"] Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.572119 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-kf59p" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.588349 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-jx9nf" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.589294 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-2zj6j"] Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.590384 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-2zj6j" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.595955 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xrt5\" (UniqueName: \"kubernetes.io/projected/508d212d-ccda-471c-94aa-96955a519e5a-kube-api-access-7xrt5\") pod \"cinder-operator-controller-manager-8d874c8fc-pqxkg\" (UID: \"508d212d-ccda-471c-94aa-96955a519e5a\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-pqxkg" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.600503 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-vqqjv" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.617772 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-54fc54694b-t88kx" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.644120 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-j629c" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.644192 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert\") pod \"infra-operator-controller-manager-79955696d6-zswlt\" (UID: \"47cf710a-e856-4094-8ef8-ff115631a236\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zswlt" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.644220 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb7sn\" (UniqueName: \"kubernetes.io/projected/5108f978-fa68-4add-9f97-5e02aec8c688-kube-api-access-zb7sn\") pod \"ironic-operator-controller-manager-5f4b8bd54d-6cf9z\" (UID: \"5108f978-fa68-4add-9f97-5e02aec8c688\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-6cf9z" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.644256 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgfrm\" (UniqueName: \"kubernetes.io/projected/f955fd59-24f1-42bb-81a8-c17e32274291-kube-api-access-qgfrm\") pod \"mariadb-operator-controller-manager-67bf948998-wmmkk\" (UID: \"f955fd59-24f1-42bb-81a8-c17e32274291\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wmmkk" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.644279 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jvzq\" (UniqueName: \"kubernetes.io/projected/645ed22c-c54e-495c-af4d-a63635f01dbc-kube-api-access-6jvzq\") pod \"octavia-operator-controller-manager-6687f8d877-2zj6j\" (UID: \"645ed22c-c54e-495c-af4d-a63635f01dbc\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-2zj6j" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.644302 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctph8\" (UniqueName: \"kubernetes.io/projected/47cf710a-e856-4094-8ef8-ff115631a236-kube-api-access-ctph8\") pod \"infra-operator-controller-manager-79955696d6-zswlt\" (UID: \"47cf710a-e856-4094-8ef8-ff115631a236\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zswlt" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.644324 4667 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swq9g\" (UniqueName: \"kubernetes.io/projected/1af3e556-130c-4530-89de-dd64852193c8-kube-api-access-swq9g\") pod \"neutron-operator-controller-manager-585dbc889-kf59p\" (UID: \"1af3e556-130c-4530-89de-dd64852193c8\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-kf59p" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.644352 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x6v5\" (UniqueName: \"kubernetes.io/projected/8a4eab04-25a1-4da9-8ee1-0243d4b69073-kube-api-access-7x6v5\") pod \"keystone-operator-controller-manager-84f48565d4-hxstt\" (UID: \"8a4eab04-25a1-4da9-8ee1-0243d4b69073\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-hxstt" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.644370 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgs2q\" (UniqueName: \"kubernetes.io/projected/ed4fdc84-4fc5-4e5a-8959-b5ea977c9b56-kube-api-access-hgs2q\") pod \"manila-operator-controller-manager-7dd968899f-w6vd6\" (UID: \"ed4fdc84-4fc5-4e5a-8959-b5ea977c9b56\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w6vd6" Jan 31 04:03:08 crc kubenswrapper[4667]: E0131 04:03:08.644639 4667 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 04:03:08 crc kubenswrapper[4667]: E0131 04:03:08.644719 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert podName:47cf710a-e856-4094-8ef8-ff115631a236 nodeName:}" failed. No retries permitted until 2026-01-31 04:03:09.144695094 +0000 UTC m=+912.661030393 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert") pod "infra-operator-controller-manager-79955696d6-zswlt" (UID: "47cf710a-e856-4094-8ef8-ff115631a236") : secret "infra-operator-webhook-server-cert" not found Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.645996 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-kf59p"] Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.661085 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzb8l" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.688286 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-2zj6j"] Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.691517 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgfrm\" (UniqueName: \"kubernetes.io/projected/f955fd59-24f1-42bb-81a8-c17e32274291-kube-api-access-qgfrm\") pod \"mariadb-operator-controller-manager-67bf948998-wmmkk\" (UID: \"f955fd59-24f1-42bb-81a8-c17e32274291\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wmmkk" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.695585 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgs2q\" (UniqueName: \"kubernetes.io/projected/ed4fdc84-4fc5-4e5a-8959-b5ea977c9b56-kube-api-access-hgs2q\") pod \"manila-operator-controller-manager-7dd968899f-w6vd6\" (UID: \"ed4fdc84-4fc5-4e5a-8959-b5ea977c9b56\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w6vd6" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.697237 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-rfpnc" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.705540 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb7sn\" (UniqueName: \"kubernetes.io/projected/5108f978-fa68-4add-9f97-5e02aec8c688-kube-api-access-zb7sn\") pod \"ironic-operator-controller-manager-5f4b8bd54d-6cf9z\" (UID: \"5108f978-fa68-4add-9f97-5e02aec8c688\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-6cf9z" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.706699 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x6v5\" (UniqueName: \"kubernetes.io/projected/8a4eab04-25a1-4da9-8ee1-0243d4b69073-kube-api-access-7x6v5\") pod \"keystone-operator-controller-manager-84f48565d4-hxstt\" (UID: \"8a4eab04-25a1-4da9-8ee1-0243d4b69073\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-hxstt" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.734595 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-zq7nc"] Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.738960 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zq7nc" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.742791 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-f495s" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.745070 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swq9g\" (UniqueName: \"kubernetes.io/projected/1af3e556-130c-4530-89de-dd64852193c8-kube-api-access-swq9g\") pod \"neutron-operator-controller-manager-585dbc889-kf59p\" (UID: \"1af3e556-130c-4530-89de-dd64852193c8\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-kf59p" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.745101 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xm9w\" (UniqueName: \"kubernetes.io/projected/4dd3097d-038b-459b-be09-25e6a9c28379-kube-api-access-7xm9w\") pod \"nova-operator-controller-manager-55bff696bd-zq7nc\" (UID: \"4dd3097d-038b-459b-be09-25e6a9c28379\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zq7nc" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.745195 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jvzq\" (UniqueName: \"kubernetes.io/projected/645ed22c-c54e-495c-af4d-a63635f01dbc-kube-api-access-6jvzq\") pod \"octavia-operator-controller-manager-6687f8d877-2zj6j\" (UID: \"645ed22c-c54e-495c-af4d-a63635f01dbc\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-2zj6j" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.749519 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctph8\" (UniqueName: \"kubernetes.io/projected/47cf710a-e856-4094-8ef8-ff115631a236-kube-api-access-ctph8\") pod \"infra-operator-controller-manager-79955696d6-zswlt\" (UID: \"47cf710a-e856-4094-8ef8-ff115631a236\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zswlt" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.759148 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw"] Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.759962 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.770618 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-cpntm" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.770806 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.774255 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-hxstt" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.779920 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-zq7nc"] Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.797125 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-vpt7r"] Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.798135 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-vpt7r" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.808813 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-plkxj" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.853204 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xm9w\" (UniqueName: \"kubernetes.io/projected/4dd3097d-038b-459b-be09-25e6a9c28379-kube-api-access-7xm9w\") pod \"nova-operator-controller-manager-55bff696bd-zq7nc\" (UID: \"4dd3097d-038b-459b-be09-25e6a9c28379\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zq7nc" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.867035 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-6cf9z" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.867325 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w6vd6" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.871653 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-pqxkg" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.912641 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xm9w\" (UniqueName: \"kubernetes.io/projected/4dd3097d-038b-459b-be09-25e6a9c28379-kube-api-access-7xm9w\") pod \"nova-operator-controller-manager-55bff696bd-zq7nc\" (UID: \"4dd3097d-038b-459b-be09-25e6a9c28379\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zq7nc" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.917279 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw"] Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.958612 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d697dw\" (UID: \"ad7389f5-4d9e-4a91-89b8-8f65e425fe83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.987184 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88ph2\" (UniqueName: \"kubernetes.io/projected/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-kube-api-access-88ph2\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d697dw\" (UID: \"ad7389f5-4d9e-4a91-89b8-8f65e425fe83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.987304 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gcwf\" (UniqueName: \"kubernetes.io/projected/4955e603-5ae1-4c59-8f06-7e4c3f1cae70-kube-api-access-9gcwf\") pod \"ovn-operator-controller-manager-788c46999f-vpt7r\" (UID: \"4955e603-5ae1-4c59-8f06-7e4c3f1cae70\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-vpt7r" Jan 31 04:03:08 crc kubenswrapper[4667]: I0131 04:03:08.961578 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wmmkk" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.006364 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swq9g\" (UniqueName: \"kubernetes.io/projected/1af3e556-130c-4530-89de-dd64852193c8-kube-api-access-swq9g\") pod \"neutron-operator-controller-manager-585dbc889-kf59p\" (UID: \"1af3e556-130c-4530-89de-dd64852193c8\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-kf59p" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.033721 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jvzq\" (UniqueName: \"kubernetes.io/projected/645ed22c-c54e-495c-af4d-a63635f01dbc-kube-api-access-6jvzq\") pod \"octavia-operator-controller-manager-6687f8d877-2zj6j\" (UID: \"645ed22c-c54e-495c-af4d-a63635f01dbc\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-2zj6j" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.053017 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-lgk8x"] Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.054612 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-lgk8x" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.057756 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-8blhh" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.059711 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zq7nc" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.094358 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-lgk8x"] Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.097257 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gcwf\" (UniqueName: \"kubernetes.io/projected/4955e603-5ae1-4c59-8f06-7e4c3f1cae70-kube-api-access-9gcwf\") pod \"ovn-operator-controller-manager-788c46999f-vpt7r\" (UID: \"4955e603-5ae1-4c59-8f06-7e4c3f1cae70\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-vpt7r" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.097332 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d697dw\" (UID: \"ad7389f5-4d9e-4a91-89b8-8f65e425fe83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.097361 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lhps\" (UniqueName: \"kubernetes.io/projected/aa7cd74d-218f-47a1-80f6-db8e475b1ba0-kube-api-access-7lhps\") pod \"placement-operator-controller-manager-5b964cf4cd-lgk8x\" (UID: \"aa7cd74d-218f-47a1-80f6-db8e475b1ba0\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-lgk8x" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.097427 4667 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-88ph2\" (UniqueName: \"kubernetes.io/projected/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-kube-api-access-88ph2\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d697dw\" (UID: \"ad7389f5-4d9e-4a91-89b8-8f65e425fe83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" Jan 31 04:03:09 crc kubenswrapper[4667]: E0131 04:03:09.097994 4667 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:03:09 crc kubenswrapper[4667]: E0131 04:03:09.098034 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert podName:ad7389f5-4d9e-4a91-89b8-8f65e425fe83 nodeName:}" failed. No retries permitted until 2026-01-31 04:03:09.598020954 +0000 UTC m=+913.114356253 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" (UID: "ad7389f5-4d9e-4a91-89b8-8f65e425fe83") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.127711 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-x46bg"] Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.141240 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-x46bg" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.151246 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4btm"] Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.152182 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4btm" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.152763 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-wnjjh" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.153105 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gcwf\" (UniqueName: \"kubernetes.io/projected/4955e603-5ae1-4c59-8f06-7e4c3f1cae70-kube-api-access-9gcwf\") pod \"ovn-operator-controller-manager-788c46999f-vpt7r\" (UID: \"4955e603-5ae1-4c59-8f06-7e4c3f1cae70\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-vpt7r" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.154430 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88ph2\" (UniqueName: \"kubernetes.io/projected/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-kube-api-access-88ph2\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d697dw\" (UID: \"ad7389f5-4d9e-4a91-89b8-8f65e425fe83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.158513 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-r86ht" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.183896 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-vpt7r"] Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.199631 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8gtz\" (UniqueName: \"kubernetes.io/projected/fc224c93-299f-4f99-b16d-64ab47cb66a8-kube-api-access-r8gtz\") pod \"swift-operator-controller-manager-68fc8c869-x46bg\" (UID: \"fc224c93-299f-4f99-b16d-64ab47cb66a8\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-x46bg" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.199674 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnmvh\" (UniqueName: \"kubernetes.io/projected/7c71998a-5e4c-461c-96f9-3ff67b4619cd-kube-api-access-hnmvh\") pod \"telemetry-operator-controller-manager-64b5b76f97-b4btm\" (UID: \"7c71998a-5e4c-461c-96f9-3ff67b4619cd\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4btm" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.199739 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lhps\" (UniqueName: \"kubernetes.io/projected/aa7cd74d-218f-47a1-80f6-db8e475b1ba0-kube-api-access-7lhps\") pod \"placement-operator-controller-manager-5b964cf4cd-lgk8x\" (UID: \"aa7cd74d-218f-47a1-80f6-db8e475b1ba0\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-lgk8x" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.199773 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert\") pod \"infra-operator-controller-manager-79955696d6-zswlt\" (UID: \"47cf710a-e856-4094-8ef8-ff115631a236\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zswlt" Jan 31 04:03:09 crc kubenswrapper[4667]: E0131 04:03:09.199923 4667 
secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 31 04:03:09 crc kubenswrapper[4667]: E0131 04:03:09.199976 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert podName:47cf710a-e856-4094-8ef8-ff115631a236 nodeName:}" failed. No retries permitted until 2026-01-31 04:03:10.19995662 +0000 UTC m=+913.716291919 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert") pod "infra-operator-controller-manager-79955696d6-zswlt" (UID: "47cf710a-e856-4094-8ef8-ff115631a236") : secret "infra-operator-webhook-server-cert" not found
Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.203019 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-vpt7r"
Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.230939 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4btm"]
Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.235300 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lhps\" (UniqueName: \"kubernetes.io/projected/aa7cd74d-218f-47a1-80f6-db8e475b1ba0-kube-api-access-7lhps\") pod \"placement-operator-controller-manager-5b964cf4cd-lgk8x\" (UID: \"aa7cd74d-218f-47a1-80f6-db8e475b1ba0\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-lgk8x"
Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.245593 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-x46bg"]
Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.263181 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-kf59p"
Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.270060 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-gzj6r"]
Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.279038 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gzj6r"
Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.286486 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-9lzbf"
Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.297569 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-2zj6j"
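
The two E-level entries at the top of this block show the kubelet's retry backoff directly: the first failure to mount the infra-operator "cert" volume (04:03:08.644719, above) was retried after 500ms, and this second failure schedules the next retry after 1s, i.e. the delay doubles per consecutive failure of the same operation. The failures themselves look like an ordering race rather than a node fault: the pod references infra-operator-webhook-server-cert before whatever issues that secret has created it, so the pod waits in ContainerCreating; the same pattern repeats below for the openstack-baremetal-operator and openstack-operator-controller-manager cert volumes. A sketch of the doubling schedule (the 500ms start and factor 2 are read off this log; the cap is an assumption, not shown here):

# Doubling retry delays as observed: 500ms, then 1s, then presumably 2s, 4s, ...
# capped at some maximum (the 120s cap below is an assumption, not from this log).
def backoff_schedule(initial=0.5, factor=2.0, cap=120.0, attempts=10):
    delays, delay = [], initial
    for _ in range(attempts):
        delays.append(delay)
        delay = min(delay * factor, cap)
    return delays

print(backoff_schedule())  # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 120.0, 120.0]
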
Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.302607 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8gtz\" (UniqueName: \"kubernetes.io/projected/fc224c93-299f-4f99-b16d-64ab47cb66a8-kube-api-access-r8gtz\") pod \"swift-operator-controller-manager-68fc8c869-x46bg\" (UID: \"fc224c93-299f-4f99-b16d-64ab47cb66a8\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-x46bg"
Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.302656 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnmvh\" (UniqueName: \"kubernetes.io/projected/7c71998a-5e4c-461c-96f9-3ff67b4619cd-kube-api-access-hnmvh\") pod \"telemetry-operator-controller-manager-64b5b76f97-b4btm\" (UID: \"7c71998a-5e4c-461c-96f9-3ff67b4619cd\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4btm"
Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.318898 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-fxzcm"]
Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.322757 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-gzj6r"]
Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.322887 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-fxzcm"
Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.326454 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-jmhbz"
Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.340677 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnmvh\" (UniqueName: \"kubernetes.io/projected/7c71998a-5e4c-461c-96f9-3ff67b4619cd-kube-api-access-hnmvh\") pod \"telemetry-operator-controller-manager-64b5b76f97-b4btm\" (UID: \"7c71998a-5e4c-461c-96f9-3ff67b4619cd\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4btm"
Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.360508 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8gtz\" (UniqueName: \"kubernetes.io/projected/fc224c93-299f-4f99-b16d-64ab47cb66a8-kube-api-access-r8gtz\") pod \"swift-operator-controller-manager-68fc8c869-x46bg\" (UID: \"fc224c93-299f-4f99-b16d-64ab47cb66a8\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-x46bg"
Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.374178 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-fxzcm"]
Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.405260 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntphh\" (UniqueName: \"kubernetes.io/projected/340a909d-7419-4721-be11-2c37a3a87022-kube-api-access-ntphh\") pod \"test-operator-controller-manager-56f8bfcd9f-gzj6r\" (UID: \"340a909d-7419-4721-be11-2c37a3a87022\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gzj6r"
Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.436427 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-lgk8x" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.491488 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-x46bg" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.508802 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntphh\" (UniqueName: \"kubernetes.io/projected/340a909d-7419-4721-be11-2c37a3a87022-kube-api-access-ntphh\") pod \"test-operator-controller-manager-56f8bfcd9f-gzj6r\" (UID: \"340a909d-7419-4721-be11-2c37a3a87022\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gzj6r" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.509164 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg6z4\" (UniqueName: \"kubernetes.io/projected/675da051-c9cc-4817-9092-478b3d90d1bf-kube-api-access-rg6z4\") pod \"watcher-operator-controller-manager-564965969-fxzcm\" (UID: \"675da051-c9cc-4817-9092-478b3d90d1bf\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-fxzcm" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.526153 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4btm" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.537749 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntphh\" (UniqueName: \"kubernetes.io/projected/340a909d-7419-4721-be11-2c37a3a87022-kube-api-access-ntphh\") pod \"test-operator-controller-manager-56f8bfcd9f-gzj6r\" (UID: \"340a909d-7419-4721-be11-2c37a3a87022\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gzj6r" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.552919 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd"] Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.553867 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.558462 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-97rz4" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.558706 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.558855 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.583199 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd"] Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.611074 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d697dw\" (UID: \"ad7389f5-4d9e-4a91-89b8-8f65e425fe83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.611225 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg6z4\" (UniqueName: \"kubernetes.io/projected/675da051-c9cc-4817-9092-478b3d90d1bf-kube-api-access-rg6z4\") pod \"watcher-operator-controller-manager-564965969-fxzcm\" (UID: \"675da051-c9cc-4817-9092-478b3d90d1bf\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-fxzcm" Jan 31 04:03:09 crc kubenswrapper[4667]: E0131 04:03:09.611748 4667 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:03:09 crc kubenswrapper[4667]: E0131 04:03:09.611860 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert podName:ad7389f5-4d9e-4a91-89b8-8f65e425fe83 nodeName:}" failed. No retries permitted until 2026-01-31 04:03:10.611821142 +0000 UTC m=+914.128156441 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" (UID: "ad7389f5-4d9e-4a91-89b8-8f65e425fe83") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.642797 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg6z4\" (UniqueName: \"kubernetes.io/projected/675da051-c9cc-4817-9092-478b3d90d1bf-kube-api-access-rg6z4\") pod \"watcher-operator-controller-manager-564965969-fxzcm\" (UID: \"675da051-c9cc-4817-9092-478b3d90d1bf\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-fxzcm" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.654311 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gzj6r" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.668877 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-2xtdf"] Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.705883 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-964b5"] Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.711298 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-964b5" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.711993 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs\") pod \"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.712104 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvz28\" (UniqueName: \"kubernetes.io/projected/5af1cf00-3340-481a-9312-cdd15cddbf5d-kube-api-access-mvz28\") pod \"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.712144 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs\") pod \"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.718824 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-b6n7z" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.739017 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-964b5"] Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.798293 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-fxzcm" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.819095 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvz28\" (UniqueName: \"kubernetes.io/projected/5af1cf00-3340-481a-9312-cdd15cddbf5d-kube-api-access-mvz28\") pod \"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.819168 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cdgl\" (UniqueName: \"kubernetes.io/projected/1f3ad0ee-dce4-4ed0-90f7-e2c195b6d099-kube-api-access-9cdgl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-964b5\" (UID: \"1f3ad0ee-dce4-4ed0-90f7-e2c195b6d099\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-964b5" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.819214 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs\") pod \"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.819253 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs\") pod \"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:09 crc kubenswrapper[4667]: E0131 04:03:09.820948 4667 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 04:03:09 crc kubenswrapper[4667]: E0131 04:03:09.821028 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs podName:5af1cf00-3340-481a-9312-cdd15cddbf5d nodeName:}" failed. No retries permitted until 2026-01-31 04:03:10.321004894 +0000 UTC m=+913.837340183 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs") pod "openstack-operator-controller-manager-fcd7f5fc5-pfnrd" (UID: "5af1cf00-3340-481a-9312-cdd15cddbf5d") : secret "webhook-server-cert" not found Jan 31 04:03:09 crc kubenswrapper[4667]: E0131 04:03:09.821258 4667 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 04:03:09 crc kubenswrapper[4667]: E0131 04:03:09.821387 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs podName:5af1cf00-3340-481a-9312-cdd15cddbf5d nodeName:}" failed. No retries permitted until 2026-01-31 04:03:10.321355753 +0000 UTC m=+913.837691272 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs") pod "openstack-operator-controller-manager-fcd7f5fc5-pfnrd" (UID: "5af1cf00-3340-481a-9312-cdd15cddbf5d") : secret "metrics-server-cert" not found Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.850007 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvz28\" (UniqueName: \"kubernetes.io/projected/5af1cf00-3340-481a-9312-cdd15cddbf5d-kube-api-access-mvz28\") pod \"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.921555 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cdgl\" (UniqueName: \"kubernetes.io/projected/1f3ad0ee-dce4-4ed0-90f7-e2c195b6d099-kube-api-access-9cdgl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-964b5\" (UID: \"1f3ad0ee-dce4-4ed0-90f7-e2c195b6d099\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-964b5" Jan 31 04:03:09 crc kubenswrapper[4667]: I0131 04:03:09.944790 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cdgl\" (UniqueName: \"kubernetes.io/projected/1f3ad0ee-dce4-4ed0-90f7-e2c195b6d099-kube-api-access-9cdgl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-964b5\" (UID: \"1f3ad0ee-dce4-4ed0-90f7-e2c195b6d099\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-964b5" Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.076421 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-964b5" Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.125274 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-54fc54694b-t88kx"] Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.167210 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-rfpnc"] Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.231704 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert\") pod \"infra-operator-controller-manager-79955696d6-zswlt\" (UID: \"47cf710a-e856-4094-8ef8-ff115631a236\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zswlt" Jan 31 04:03:10 crc kubenswrapper[4667]: E0131 04:03:10.231982 4667 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 04:03:10 crc kubenswrapper[4667]: E0131 04:03:10.232048 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert podName:47cf710a-e856-4094-8ef8-ff115631a236 nodeName:}" failed. No retries permitted until 2026-01-31 04:03:12.232028744 +0000 UTC m=+915.748364043 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert") pod "infra-operator-controller-manager-79955696d6-zswlt" (UID: "47cf710a-e856-4094-8ef8-ff115631a236") : secret "infra-operator-webhook-server-cert" not found Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.315128 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-pqxkg"] Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.330106 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-lzb8l"] Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.332879 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs\") pod \"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.332935 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs\") pod \"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:10 crc kubenswrapper[4667]: E0131 04:03:10.333150 4667 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 04:03:10 crc kubenswrapper[4667]: E0131 04:03:10.333218 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs podName:5af1cf00-3340-481a-9312-cdd15cddbf5d nodeName:}" failed. No retries permitted until 2026-01-31 04:03:11.33319664 +0000 UTC m=+914.849531929 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs") pod "openstack-operator-controller-manager-fcd7f5fc5-pfnrd" (UID: "5af1cf00-3340-481a-9312-cdd15cddbf5d") : secret "metrics-server-cert" not found Jan 31 04:03:10 crc kubenswrapper[4667]: E0131 04:03:10.333264 4667 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 04:03:10 crc kubenswrapper[4667]: E0131 04:03:10.333284 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs podName:5af1cf00-3340-481a-9312-cdd15cddbf5d nodeName:}" failed. No retries permitted until 2026-01-31 04:03:11.333277452 +0000 UTC m=+914.849612751 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs") pod "openstack-operator-controller-manager-fcd7f5fc5-pfnrd" (UID: "5af1cf00-3340-481a-9312-cdd15cddbf5d") : secret "webhook-server-cert" not found Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.641339 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d697dw\" (UID: \"ad7389f5-4d9e-4a91-89b8-8f65e425fe83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" Jan 31 04:03:10 crc kubenswrapper[4667]: E0131 04:03:10.641599 4667 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:03:10 crc kubenswrapper[4667]: E0131 04:03:10.641731 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert podName:ad7389f5-4d9e-4a91-89b8-8f65e425fe83 nodeName:}" failed. No retries permitted until 2026-01-31 04:03:12.641683359 +0000 UTC m=+916.158018658 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" (UID: "ad7389f5-4d9e-4a91-89b8-8f65e425fe83") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.715223 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-lgk8x"] Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.740241 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-w6vd6"] Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.762305 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-wmmkk"] Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.778964 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-j629c"] Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.784923 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-kf59p"] Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.804982 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-6cf9z"] Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.851630 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-hxstt"] Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.875322 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-vpt7r"] Jan 31 04:03:10 crc kubenswrapper[4667]: E0131 04:03:10.880999 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6jvzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-2zj6j_openstack-operators(645ed22c-c54e-495c-af4d-a63635f01dbc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 04:03:10 crc kubenswrapper[4667]: E0131 04:03:10.882390 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-2zj6j" podUID="645ed22c-c54e-495c-af4d-a63635f01dbc" Jan 31 04:03:10 crc kubenswrapper[4667]: E0131 04:03:10.884031 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: 
{{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7xm9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-zq7nc_openstack-operators(4dd3097d-038b-459b-be09-25e6a9c28379): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.884073 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-pqxkg" event={"ID":"508d212d-ccda-471c-94aa-96955a519e5a","Type":"ContainerStarted","Data":"0e9a42cce01d0c1f14c557b2885e46eb6dca6ff544febec37a4da0f5f53a7c34"} Jan 31 04:03:10 crc kubenswrapper[4667]: E0131 04:03:10.884250 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qgfrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-wmmkk_openstack-operators(f955fd59-24f1-42bb-81a8-c17e32274291): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 04:03:10 crc kubenswrapper[4667]: E0131 04:03:10.885293 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zq7nc" podUID="4dd3097d-038b-459b-be09-25e6a9c28379" Jan 31 04:03:10 crc kubenswrapper[4667]: E0131 04:03:10.885582 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wmmkk" podUID="f955fd59-24f1-42bb-81a8-c17e32274291" Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.886896 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4btm"] Jan 31 04:03:10 crc kubenswrapper[4667]: E0131 04:03:10.885478 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rg6z4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-fxzcm_openstack-operators(675da051-c9cc-4817-9092-478b3d90d1bf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 04:03:10 crc kubenswrapper[4667]: E0131 04:03:10.888136 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-fxzcm" podUID="675da051-c9cc-4817-9092-478b3d90d1bf" Jan 31 04:03:10 crc kubenswrapper[4667]: E0131 04:03:10.888173 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ntphh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-gzj6r_openstack-operators(340a909d-7419-4721-be11-2c37a3a87022): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.889240 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-54fc54694b-t88kx" event={"ID":"75fa830b-0948-4104-874f-332cb2ea9de2","Type":"ContainerStarted","Data":"3ccc559b0802f1b6912dcd260e57b85243697a4b72123203e355cadda2661bf8"} Jan 31 04:03:10 crc kubenswrapper[4667]: E0131 04:03:10.889468 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gzj6r" podUID="340a909d-7419-4721-be11-2c37a3a87022" Jan 31 04:03:10 crc kubenswrapper[4667]: E0131 04:03:10.892292 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9cdgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-964b5_openstack-operators(1f3ad0ee-dce4-4ed0-90f7-e2c195b6d099): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 31 04:03:10 crc kubenswrapper[4667]: E0131 04:03:10.893401 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-964b5" podUID="1f3ad0ee-dce4-4ed0-90f7-e2c195b6d099" Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.894992 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzb8l" event={"ID":"cfe9238d-7457-43f4-9933-cece048fc3fe","Type":"ContainerStarted","Data":"5139d9d0a5b78d64e4329c9293e4d04b77f7064ffdb84b667a272325f10da745"} Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.898061 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-rfpnc" event={"ID":"f26454ff-c920-4240-84dd-684272f0c0c8","Type":"ContainerStarted","Data":"6e81c81237d3a8df741d84df000cce96532915dc2a8af0cedef135707d0ac9e3"} Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.901150 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2xtdf" event={"ID":"5743730b-079b-4b07-a87b-932cd637e387","Type":"ContainerStarted","Data":"cdbc03e85c2859c74ecc35e50fa29b2f1dcd4baaa67ac6d534964a24a2306670"} Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.904746 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-2zj6j"] Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.911959 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-zq7nc"] Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.926365 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-x46bg"] Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.945713 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-gzj6r"] Jan 31 04:03:10 crc kubenswrapper[4667]: I0131 04:03:10.954334 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-fxzcm"]
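The ErrImagePull "pull QPS exceeded" failures above are not registry errors: the kubelet rate-limits image pulls client-side (the registryPullQPS and registryBurst fields of the kubelet configuration, which default to 5 QPS with a burst of 10), and with roughly eighteen operator images requested at once the token bucket empties. A sketch of that failure mode using golang.org/x/time/rate (the kubelet uses its own internal limiter; this only illustrates the mechanism):

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// Kubelet defaults: registryPullQPS=5, registryBurst=10.
	limiter := rate.NewLimiter(rate.Limit(5), 10)

	// ~18 operator images requested in the same instant: the first ten
	// consume the burst, the rest fail immediately, which is what the
	// records above surface as ErrImagePull: "pull QPS exceeded".
	for pull := 1; pull <= 18; pull++ {
		if limiter.Allow() {
			fmt.Printf("pull %2d: allowed\n", pull)
		} else {
			fmt.Printf("pull %2d: rejected (pull QPS exceeded)\n", pull)
		}
	}
}
```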
pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-964b5"] Jan 31 04:03:11 crc kubenswrapper[4667]: I0131 04:03:11.352357 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs\") pod \"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:11 crc kubenswrapper[4667]: I0131 04:03:11.352570 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs\") pod \"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:11 crc kubenswrapper[4667]: E0131 04:03:11.352609 4667 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 04:03:11 crc kubenswrapper[4667]: E0131 04:03:11.352711 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs podName:5af1cf00-3340-481a-9312-cdd15cddbf5d nodeName:}" failed. No retries permitted until 2026-01-31 04:03:13.352685293 +0000 UTC m=+916.869020812 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs") pod "openstack-operator-controller-manager-fcd7f5fc5-pfnrd" (UID: "5af1cf00-3340-481a-9312-cdd15cddbf5d") : secret "metrics-server-cert" not found Jan 31 04:03:11 crc kubenswrapper[4667]: E0131 04:03:11.353229 4667 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 04:03:11 crc kubenswrapper[4667]: E0131 04:03:11.353264 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs podName:5af1cf00-3340-481a-9312-cdd15cddbf5d nodeName:}" failed. No retries permitted until 2026-01-31 04:03:13.353252538 +0000 UTC m=+916.869588047 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs") pod "openstack-operator-controller-manager-fcd7f5fc5-pfnrd" (UID: "5af1cf00-3340-481a-9312-cdd15cddbf5d") : secret "webhook-server-cert" not found Jan 31 04:03:11 crc kubenswrapper[4667]: I0131 04:03:11.913002 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gzj6r" event={"ID":"340a909d-7419-4721-be11-2c37a3a87022","Type":"ContainerStarted","Data":"ed3ddb088515ac7cd27eb3e1b5a58cc637dd427baf26241a2dfc9582f24d7f59"} Jan 31 04:03:11 crc kubenswrapper[4667]: E0131 04:03:11.939201 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gzj6r" podUID="340a909d-7419-4721-be11-2c37a3a87022" Jan 31 04:03:11 crc kubenswrapper[4667]: I0131 04:03:11.954663 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-2zj6j" event={"ID":"645ed22c-c54e-495c-af4d-a63635f01dbc","Type":"ContainerStarted","Data":"d9aadd86b1f1428c72f1b5dc962e43673fd253f1ae75fdc9574630be2fee3062"} Jan 31 04:03:11 crc kubenswrapper[4667]: E0131 04:03:11.956780 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-2zj6j" podUID="645ed22c-c54e-495c-af4d-a63635f01dbc" Jan 31 04:03:11 crc kubenswrapper[4667]: I0131 04:03:11.959102 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-j629c" event={"ID":"5280851f-6404-45ad-adc7-f41479cb7dc3","Type":"ContainerStarted","Data":"1ee7751bf66d43eb14817a4d875336ddecfe7ec61676b28195fffc404f668eeb"} Jan 31 04:03:11 crc kubenswrapper[4667]: I0131 04:03:11.960929 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-lgk8x" event={"ID":"aa7cd74d-218f-47a1-80f6-db8e475b1ba0","Type":"ContainerStarted","Data":"324aaf819fec0bbcb0f72d0ec4210aaf147eee7de4db24409602e01d496f91c2"} Jan 31 04:03:11 crc kubenswrapper[4667]: I0131 04:03:11.961813 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-vpt7r" event={"ID":"4955e603-5ae1-4c59-8f06-7e4c3f1cae70","Type":"ContainerStarted","Data":"745781161ffd4536914e38c146b4010ce55fee4228c09c95f177ee460c6b6eb7"} Jan 31 04:03:11 crc kubenswrapper[4667]: I0131 04:03:11.963690 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-964b5" event={"ID":"1f3ad0ee-dce4-4ed0-90f7-e2c195b6d099","Type":"ContainerStarted","Data":"535b5ee89451dde6a9c60dc948df60302bad4487bfce0bc18e9b69fe5be543d5"} Jan 31 04:03:11 crc kubenswrapper[4667]: I0131 04:03:11.965797 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-kf59p" 
event={"ID":"1af3e556-130c-4530-89de-dd64852193c8","Type":"ContainerStarted","Data":"ebbd93985021f997b2d6090869afa6cafed7c288a2e215c958d0358b57d770a8"} Jan 31 04:03:11 crc kubenswrapper[4667]: E0131 04:03:11.974974 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-964b5" podUID="1f3ad0ee-dce4-4ed0-90f7-e2c195b6d099" Jan 31 04:03:11 crc kubenswrapper[4667]: I0131 04:03:11.978631 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-fxzcm" event={"ID":"675da051-c9cc-4817-9092-478b3d90d1bf","Type":"ContainerStarted","Data":"e9260a7555e8e867a14dd10ddd5235e93275deb6245f8864291a4c65d95b83df"} Jan 31 04:03:11 crc kubenswrapper[4667]: E0131 04:03:11.981224 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-fxzcm" podUID="675da051-c9cc-4817-9092-478b3d90d1bf" Jan 31 04:03:11 crc kubenswrapper[4667]: I0131 04:03:11.996611 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-6cf9z" event={"ID":"5108f978-fa68-4add-9f97-5e02aec8c688","Type":"ContainerStarted","Data":"ca2c287eda8b941e1f6ade1c938700e8fae794472e3acdb655aa7ca501f9dde8"} Jan 31 04:03:12 crc kubenswrapper[4667]: I0131 04:03:12.018834 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w6vd6" event={"ID":"ed4fdc84-4fc5-4e5a-8959-b5ea977c9b56","Type":"ContainerStarted","Data":"e5de37c80a2196070997d1a0a73b65259123b788a97e1b9aad29339847d8f545"} Jan 31 04:03:12 crc kubenswrapper[4667]: I0131 04:03:12.024618 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zq7nc" event={"ID":"4dd3097d-038b-459b-be09-25e6a9c28379","Type":"ContainerStarted","Data":"e498265d310fa8eee999240114f29195eee67aa88b3cbd20fba05015ad8eeb77"} Jan 31 04:03:12 crc kubenswrapper[4667]: E0131 04:03:12.026721 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zq7nc" podUID="4dd3097d-038b-459b-be09-25e6a9c28379" Jan 31 04:03:12 crc kubenswrapper[4667]: I0131 04:03:12.031380 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-x46bg" event={"ID":"fc224c93-299f-4f99-b16d-64ab47cb66a8","Type":"ContainerStarted","Data":"921023d596998a1b6cc585c30db3dc9789236c826588d0da2cf5aeed2a77d79c"} Jan 31 04:03:12 crc kubenswrapper[4667]: I0131 04:03:12.044188 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wmmkk" 
event={"ID":"f955fd59-24f1-42bb-81a8-c17e32274291","Type":"ContainerStarted","Data":"1292a82d9a8e536ae55b78a6cdd2b4ac6b942be410012272315da166219a49f1"} Jan 31 04:03:12 crc kubenswrapper[4667]: I0131 04:03:12.046721 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-hxstt" event={"ID":"8a4eab04-25a1-4da9-8ee1-0243d4b69073","Type":"ContainerStarted","Data":"18378c6234026b40e1def79b0ac2e47e59bc8289c4732aec2b496753a2f5488c"} Jan 31 04:03:12 crc kubenswrapper[4667]: E0131 04:03:12.053124 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wmmkk" podUID="f955fd59-24f1-42bb-81a8-c17e32274291" Jan 31 04:03:12 crc kubenswrapper[4667]: I0131 04:03:12.054664 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4btm" event={"ID":"7c71998a-5e4c-461c-96f9-3ff67b4619cd","Type":"ContainerStarted","Data":"661971d7f583d07b0a423b5faab7678c623dcec343408555946c0dc5b84419a8"} Jan 31 04:03:12 crc kubenswrapper[4667]: I0131 04:03:12.268452 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert\") pod \"infra-operator-controller-manager-79955696d6-zswlt\" (UID: \"47cf710a-e856-4094-8ef8-ff115631a236\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zswlt" Jan 31 04:03:12 crc kubenswrapper[4667]: E0131 04:03:12.268682 4667 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 04:03:12 crc kubenswrapper[4667]: E0131 04:03:12.269149 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert podName:47cf710a-e856-4094-8ef8-ff115631a236 nodeName:}" failed. No retries permitted until 2026-01-31 04:03:16.269121571 +0000 UTC m=+919.785456870 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert") pod "infra-operator-controller-manager-79955696d6-zswlt" (UID: "47cf710a-e856-4094-8ef8-ff115631a236") : secret "infra-operator-webhook-server-cert" not found Jan 31 04:03:12 crc kubenswrapper[4667]: I0131 04:03:12.676891 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d697dw\" (UID: \"ad7389f5-4d9e-4a91-89b8-8f65e425fe83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" Jan 31 04:03:12 crc kubenswrapper[4667]: E0131 04:03:12.677166 4667 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:03:12 crc kubenswrapper[4667]: E0131 04:03:12.677317 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert podName:ad7389f5-4d9e-4a91-89b8-8f65e425fe83 nodeName:}" failed. 
No retries permitted until 2026-01-31 04:03:16.677293546 +0000 UTC m=+920.193628845 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" (UID: "ad7389f5-4d9e-4a91-89b8-8f65e425fe83") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:03:13 crc kubenswrapper[4667]: E0131 04:03:13.094281 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-2zj6j" podUID="645ed22c-c54e-495c-af4d-a63635f01dbc" Jan 31 04:03:13 crc kubenswrapper[4667]: E0131 04:03:13.094624 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-fxzcm" podUID="675da051-c9cc-4817-9092-478b3d90d1bf" Jan 31 04:03:13 crc kubenswrapper[4667]: E0131 04:03:13.094681 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zq7nc" podUID="4dd3097d-038b-459b-be09-25e6a9c28379" Jan 31 04:03:13 crc kubenswrapper[4667]: E0131 04:03:13.094819 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-964b5" podUID="1f3ad0ee-dce4-4ed0-90f7-e2c195b6d099" Jan 31 04:03:13 crc kubenswrapper[4667]: E0131 04:03:13.094887 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gzj6r" podUID="340a909d-7419-4721-be11-2c37a3a87022" Jan 31 04:03:13 crc kubenswrapper[4667]: E0131 04:03:13.094931 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wmmkk" podUID="f955fd59-24f1-42bb-81a8-c17e32274291" Jan 31 04:03:13 crc kubenswrapper[4667]: I0131 04:03:13.402397 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs\") pod 
\"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:13 crc kubenswrapper[4667]: I0131 04:03:13.402512 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs\") pod \"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:13 crc kubenswrapper[4667]: E0131 04:03:13.403541 4667 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 04:03:13 crc kubenswrapper[4667]: E0131 04:03:13.403607 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs podName:5af1cf00-3340-481a-9312-cdd15cddbf5d nodeName:}" failed. No retries permitted until 2026-01-31 04:03:17.403586903 +0000 UTC m=+920.919922202 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs") pod "openstack-operator-controller-manager-fcd7f5fc5-pfnrd" (UID: "5af1cf00-3340-481a-9312-cdd15cddbf5d") : secret "webhook-server-cert" not found Jan 31 04:03:13 crc kubenswrapper[4667]: E0131 04:03:13.403712 4667 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 04:03:13 crc kubenswrapper[4667]: E0131 04:03:13.403754 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs podName:5af1cf00-3340-481a-9312-cdd15cddbf5d nodeName:}" failed. No retries permitted until 2026-01-31 04:03:17.403743237 +0000 UTC m=+920.920078536 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs") pod "openstack-operator-controller-manager-fcd7f5fc5-pfnrd" (UID: "5af1cf00-3340-481a-9312-cdd15cddbf5d") : secret "metrics-server-cert" not found Jan 31 04:03:16 crc kubenswrapper[4667]: I0131 04:03:16.351949 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert\") pod \"infra-operator-controller-manager-79955696d6-zswlt\" (UID: \"47cf710a-e856-4094-8ef8-ff115631a236\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zswlt" Jan 31 04:03:16 crc kubenswrapper[4667]: E0131 04:03:16.352247 4667 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 04:03:16 crc kubenswrapper[4667]: E0131 04:03:16.352612 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert podName:47cf710a-e856-4094-8ef8-ff115631a236 nodeName:}" failed. No retries permitted until 2026-01-31 04:03:24.352578457 +0000 UTC m=+927.868913796 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert") pod "infra-operator-controller-manager-79955696d6-zswlt" (UID: "47cf710a-e856-4094-8ef8-ff115631a236") : secret "infra-operator-webhook-server-cert" not found Jan 31 04:03:16 crc kubenswrapper[4667]: I0131 04:03:16.758056 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d697dw\" (UID: \"ad7389f5-4d9e-4a91-89b8-8f65e425fe83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" Jan 31 04:03:16 crc kubenswrapper[4667]: E0131 04:03:16.758255 4667 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:03:16 crc kubenswrapper[4667]: E0131 04:03:16.758361 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert podName:ad7389f5-4d9e-4a91-89b8-8f65e425fe83 nodeName:}" failed. No retries permitted until 2026-01-31 04:03:24.758324278 +0000 UTC m=+928.274659587 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" (UID: "ad7389f5-4d9e-4a91-89b8-8f65e425fe83") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:03:17 crc kubenswrapper[4667]: I0131 04:03:17.470221 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs\") pod \"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:17 crc kubenswrapper[4667]: I0131 04:03:17.470408 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs\") pod \"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:17 crc kubenswrapper[4667]: E0131 04:03:17.470641 4667 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 04:03:17 crc kubenswrapper[4667]: E0131 04:03:17.470792 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs podName:5af1cf00-3340-481a-9312-cdd15cddbf5d nodeName:}" failed. No retries permitted until 2026-01-31 04:03:25.470772679 +0000 UTC m=+928.987107978 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs") pod "openstack-operator-controller-manager-fcd7f5fc5-pfnrd" (UID: "5af1cf00-3340-481a-9312-cdd15cddbf5d") : secret "webhook-server-cert" not found Jan 31 04:03:17 crc kubenswrapper[4667]: E0131 04:03:17.470959 4667 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 04:03:17 crc kubenswrapper[4667]: E0131 04:03:17.471048 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs podName:5af1cf00-3340-481a-9312-cdd15cddbf5d nodeName:}" failed. No retries permitted until 2026-01-31 04:03:25.471027446 +0000 UTC m=+928.987362745 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs") pod "openstack-operator-controller-manager-fcd7f5fc5-pfnrd" (UID: "5af1cf00-3340-481a-9312-cdd15cddbf5d") : secret "metrics-server-cert" not found Jan 31 04:03:24 crc kubenswrapper[4667]: I0131 04:03:24.380135 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert\") pod \"infra-operator-controller-manager-79955696d6-zswlt\" (UID: \"47cf710a-e856-4094-8ef8-ff115631a236\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zswlt" Jan 31 04:03:24 crc kubenswrapper[4667]: E0131 04:03:24.380346 4667 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 31 04:03:24 crc kubenswrapper[4667]: E0131 04:03:24.381586 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert podName:47cf710a-e856-4094-8ef8-ff115631a236 nodeName:}" failed. No retries permitted until 2026-01-31 04:03:40.381563971 +0000 UTC m=+943.897899270 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert") pod "infra-operator-controller-manager-79955696d6-zswlt" (UID: "47cf710a-e856-4094-8ef8-ff115631a236") : secret "infra-operator-webhook-server-cert" not found Jan 31 04:03:24 crc kubenswrapper[4667]: I0131 04:03:24.786324 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d697dw\" (UID: \"ad7389f5-4d9e-4a91-89b8-8f65e425fe83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" Jan 31 04:03:24 crc kubenswrapper[4667]: E0131 04:03:24.786561 4667 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:03:24 crc kubenswrapper[4667]: E0131 04:03:24.786749 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert podName:ad7389f5-4d9e-4a91-89b8-8f65e425fe83 nodeName:}" failed. No retries permitted until 2026-01-31 04:03:40.786719606 +0000 UTC m=+944.303054945 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" (UID: "ad7389f5-4d9e-4a91-89b8-8f65e425fe83") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 31 04:03:25 crc kubenswrapper[4667]: I0131 04:03:25.504519 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs\") pod \"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:25 crc kubenswrapper[4667]: I0131 04:03:25.504668 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs\") pod \"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:25 crc kubenswrapper[4667]: E0131 04:03:25.504810 4667 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 31 04:03:25 crc kubenswrapper[4667]: E0131 04:03:25.504895 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs podName:5af1cf00-3340-481a-9312-cdd15cddbf5d nodeName:}" failed. No retries permitted until 2026-01-31 04:03:41.504873589 +0000 UTC m=+945.021209048 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs") pod "openstack-operator-controller-manager-fcd7f5fc5-pfnrd" (UID: "5af1cf00-3340-481a-9312-cdd15cddbf5d") : secret "webhook-server-cert" not found Jan 31 04:03:25 crc kubenswrapper[4667]: E0131 04:03:25.505004 4667 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 31 04:03:25 crc kubenswrapper[4667]: E0131 04:03:25.505053 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs podName:5af1cf00-3340-481a-9312-cdd15cddbf5d nodeName:}" failed. No retries permitted until 2026-01-31 04:03:41.505042094 +0000 UTC m=+945.021377393 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs") pod "openstack-operator-controller-manager-fcd7f5fc5-pfnrd" (UID: "5af1cf00-3340-481a-9312-cdd15cddbf5d") : secret "metrics-server-cert" not found Jan 31 04:03:26 crc kubenswrapper[4667]: E0131 04:03:26.195649 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Jan 31 04:03:26 crc kubenswrapper[4667]: E0131 04:03:26.195867 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hgs2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-w6vd6_openstack-operators(ed4fdc84-4fc5-4e5a-8959-b5ea977c9b56): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:03:26 crc kubenswrapper[4667]: E0131 04:03:26.202785 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w6vd6" 
podUID="ed4fdc84-4fc5-4e5a-8959-b5ea977c9b56" Jan 31 04:03:27 crc kubenswrapper[4667]: E0131 04:03:27.184534 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w6vd6" podUID="ed4fdc84-4fc5-4e5a-8959-b5ea977c9b56" Jan 31 04:03:27 crc kubenswrapper[4667]: E0131 04:03:27.802276 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 31 04:03:27 crc kubenswrapper[4667]: E0131 04:03:27.802955 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7x6v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-hxstt_openstack-operators(8a4eab04-25a1-4da9-8ee1-0243d4b69073): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:03:27 crc kubenswrapper[4667]: E0131 04:03:27.804141 4667 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-hxstt" podUID="8a4eab04-25a1-4da9-8ee1-0243d4b69073" Jan 31 04:03:28 crc kubenswrapper[4667]: E0131 04:03:28.189092 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-hxstt" podUID="8a4eab04-25a1-4da9-8ee1-0243d4b69073" Jan 31 04:03:28 crc kubenswrapper[4667]: E0131 04:03:28.403812 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 31 04:03:28 crc kubenswrapper[4667]: E0131 04:03:28.404013 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r4gjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
horizon-operator-controller-manager-5fb775575f-rfpnc_openstack-operators(f26454ff-c920-4240-84dd-684272f0c0c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:03:28 crc kubenswrapper[4667]: E0131 04:03:28.405599 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-rfpnc" podUID="f26454ff-c920-4240-84dd-684272f0c0c8" Jan 31 04:03:29 crc kubenswrapper[4667]: E0131 04:03:29.212748 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-rfpnc" podUID="f26454ff-c920-4240-84dd-684272f0c0c8" Jan 31 04:03:31 crc kubenswrapper[4667]: E0131 04:03:31.243203 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Jan 31 04:03:31 crc kubenswrapper[4667]: E0131 04:03:31.244102 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9gcwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-vpt7r_openstack-operators(4955e603-5ae1-4c59-8f06-7e4c3f1cae70): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:03:31 crc kubenswrapper[4667]: E0131 04:03:31.245402 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-vpt7r" podUID="4955e603-5ae1-4c59-8f06-7e4c3f1cae70" Jan 31 04:03:31 crc kubenswrapper[4667]: E0131 04:03:31.902808 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4" Jan 31 04:03:31 crc kubenswrapper[4667]: E0131 04:03:31.903077 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mmmx6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-8886f4c47-j629c_openstack-operators(5280851f-6404-45ad-adc7-f41479cb7dc3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:03:31 crc kubenswrapper[4667]: E0131 04:03:31.905003 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-j629c" podUID="5280851f-6404-45ad-adc7-f41479cb7dc3" Jan 31 04:03:32 crc kubenswrapper[4667]: E0131 04:03:32.238976 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4\\\"\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-j629c" podUID="5280851f-6404-45ad-adc7-f41479cb7dc3" Jan 31 04:03:32 crc kubenswrapper[4667]: E0131 04:03:32.238975 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-vpt7r" podUID="4955e603-5ae1-4c59-8f06-7e4c3f1cae70" Jan 31 04:03:33 crc kubenswrapper[4667]: E0131 04:03:33.845378 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10" Jan 31 04:03:33 crc kubenswrapper[4667]: E0131 04:03:33.845621 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8bvsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69d6db494d-lzb8l_openstack-operators(cfe9238d-7457-43f4-9933-cece048fc3fe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:03:33 crc kubenswrapper[4667]: E0131 04:03:33.846912 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzb8l" podUID="cfe9238d-7457-43f4-9933-cece048fc3fe" Jan 31 04:03:34 crc kubenswrapper[4667]: E0131 04:03:34.250696 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzb8l" podUID="cfe9238d-7457-43f4-9933-cece048fc3fe" Jan 31 04:03:34 crc kubenswrapper[4667]: E0131 04:03:34.524894 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a" Jan 31 04:03:34 crc kubenswrapper[4667]: E0131 04:03:34.525128 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hnmvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-b4btm_openstack-operators(7c71998a-5e4c-461c-96f9-3ff67b4619cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:03:34 crc kubenswrapper[4667]: E0131 04:03:34.527019 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4btm" podUID="7c71998a-5e4c-461c-96f9-3ff67b4619cd" Jan 31 04:03:35 crc kubenswrapper[4667]: E0131 04:03:35.171829 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Jan 31 04:03:35 crc kubenswrapper[4667]: E0131 04:03:35.172070 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r8gtz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-x46bg_openstack-operators(fc224c93-299f-4f99-b16d-64ab47cb66a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:03:35 crc kubenswrapper[4667]: E0131 04:03:35.173224 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-x46bg" podUID="fc224c93-299f-4f99-b16d-64ab47cb66a8" Jan 31 04:03:35 crc kubenswrapper[4667]: E0131 04:03:35.256952 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4btm" podUID="7c71998a-5e4c-461c-96f9-3ff67b4619cd" Jan 31 04:03:35 crc kubenswrapper[4667]: E0131 04:03:35.257303 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-x46bg" 
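
At this point eight different operator controller-managers (manila, keystone, horizon, ovn, glance, heat, telemetry, swift) are cycling between ErrImagePull and ImagePullBackOff. When triaging a journal like this one, a small filter that counts pull failures per pod helps. The sketch below scans stdin (for example, journalctl output piped in) for the pod="namespace/name" attribute on the failure entries; the regular expression is tuned to the exact message shapes above and is an assumption, not a general kubelet log grammar.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the failure reason and the pod="ns/name" attribute emitted by the
// pod_workers.go "Error syncing pod" entries shown above.
var podAttr = regexp.MustCompile(`(ErrImagePull|ImagePullBackOff).*?pod="([^"]+)"`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		for _, m := range podAttr.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[2]+" ("+m[1]+")"]++
		}
	}
	for key, n := range counts {
		fmt.Printf("%4d  %s\n", n, key)
	}
}
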
podUID="fc224c93-299f-4f99-b16d-64ab47cb66a8" Jan 31 04:03:40 crc kubenswrapper[4667]: I0131 04:03:40.463920 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert\") pod \"infra-operator-controller-manager-79955696d6-zswlt\" (UID: \"47cf710a-e856-4094-8ef8-ff115631a236\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zswlt" Jan 31 04:03:40 crc kubenswrapper[4667]: I0131 04:03:40.469308 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47cf710a-e856-4094-8ef8-ff115631a236-cert\") pod \"infra-operator-controller-manager-79955696d6-zswlt\" (UID: \"47cf710a-e856-4094-8ef8-ff115631a236\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zswlt" Jan 31 04:03:40 crc kubenswrapper[4667]: I0131 04:03:40.517676 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zswlt" Jan 31 04:03:40 crc kubenswrapper[4667]: I0131 04:03:40.871344 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d697dw\" (UID: \"ad7389f5-4d9e-4a91-89b8-8f65e425fe83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" Jan 31 04:03:40 crc kubenswrapper[4667]: I0131 04:03:40.877669 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad7389f5-4d9e-4a91-89b8-8f65e425fe83-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d697dw\" (UID: \"ad7389f5-4d9e-4a91-89b8-8f65e425fe83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" Jan 31 04:03:40 crc kubenswrapper[4667]: I0131 04:03:40.969466 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" Jan 31 04:03:41 crc kubenswrapper[4667]: I0131 04:03:41.587154 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs\") pod \"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:41 crc kubenswrapper[4667]: I0131 04:03:41.587628 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs\") pod \"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:41 crc kubenswrapper[4667]: I0131 04:03:41.605434 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-metrics-certs\") pod \"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:41 crc kubenswrapper[4667]: I0131 04:03:41.606020 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5af1cf00-3340-481a-9312-cdd15cddbf5d-webhook-certs\") pod \"openstack-operator-controller-manager-fcd7f5fc5-pfnrd\" (UID: \"5af1cf00-3340-481a-9312-cdd15cddbf5d\") " pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:41 crc kubenswrapper[4667]: I0131 04:03:41.709517 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:42 crc kubenswrapper[4667]: E0131 04:03:42.719960 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 31 04:03:42 crc kubenswrapper[4667]: E0131 04:03:42.720207 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ntphh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-gzj6r_openstack-operators(340a909d-7419-4721-be11-2c37a3a87022): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:03:42 crc kubenswrapper[4667]: E0131 04:03:42.722100 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gzj6r" podUID="340a909d-7419-4721-be11-2c37a3a87022" Jan 31 04:03:44 crc kubenswrapper[4667]: E0131 04:03:44.537040 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = 
copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 31 04:03:44 crc kubenswrapper[4667]: E0131 04:03:44.537251 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9cdgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-964b5_openstack-operators(1f3ad0ee-dce4-4ed0-90f7-e2c195b6d099): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:03:44 crc kubenswrapper[4667]: E0131 04:03:44.538466 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-964b5" podUID="1f3ad0ee-dce4-4ed0-90f7-e2c195b6d099" Jan 31 04:03:45 crc kubenswrapper[4667]: I0131 04:03:45.031176 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw"] Jan 31 04:03:45 crc kubenswrapper[4667]: I0131 04:03:45.145905 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd"] Jan 31 04:03:45 crc kubenswrapper[4667]: W0131 04:03:45.176126 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5af1cf00_3340_481a_9312_cdd15cddbf5d.slice/crio-eeb4131e86d3136fe15c623a76d16d3e77de7ad3534c599dd744b594044ae3ca WatchSource:0}: Error finding container 
eeb4131e86d3136fe15c623a76d16d3e77de7ad3534c599dd744b594044ae3ca: Status 404 returned error can't find the container with id eeb4131e86d3136fe15c623a76d16d3e77de7ad3534c599dd744b594044ae3ca Jan 31 04:03:45 crc kubenswrapper[4667]: I0131 04:03:45.278803 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-zswlt"] Jan 31 04:03:45 crc kubenswrapper[4667]: I0131 04:03:45.353277 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-6cf9z" event={"ID":"5108f978-fa68-4add-9f97-5e02aec8c688","Type":"ContainerStarted","Data":"1f80b3f3e4710986da9d2ffb711fae4b2fca1d9855c1d32524b1c81491a19130"} Jan 31 04:03:45 crc kubenswrapper[4667]: I0131 04:03:45.354052 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-6cf9z" Jan 31 04:03:45 crc kubenswrapper[4667]: I0131 04:03:45.378943 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-54fc54694b-t88kx" event={"ID":"75fa830b-0948-4104-874f-332cb2ea9de2","Type":"ContainerStarted","Data":"23d39ccea49dc593d55c51ff75ae6909959c258dba0312ade94ed5daa2f85804"} Jan 31 04:03:45 crc kubenswrapper[4667]: I0131 04:03:45.379016 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-54fc54694b-t88kx" Jan 31 04:03:45 crc kubenswrapper[4667]: I0131 04:03:45.384077 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-kf59p" event={"ID":"1af3e556-130c-4530-89de-dd64852193c8","Type":"ContainerStarted","Data":"5fc45fad2c16f0045c5581ec680cbedb33dc1f92f5333f149f33c89eb0188d00"} Jan 31 04:03:45 crc kubenswrapper[4667]: I0131 04:03:45.384494 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-kf59p" Jan 31 04:03:45 crc kubenswrapper[4667]: I0131 04:03:45.385819 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-lgk8x" event={"ID":"aa7cd74d-218f-47a1-80f6-db8e475b1ba0","Type":"ContainerStarted","Data":"aa356c2335e2a6c3a995b28f625859ceaf7820b5258a23593e050c5993903a08"} Jan 31 04:03:45 crc kubenswrapper[4667]: I0131 04:03:45.386557 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-lgk8x" Jan 31 04:03:45 crc kubenswrapper[4667]: I0131 04:03:45.388181 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" event={"ID":"5af1cf00-3340-481a-9312-cdd15cddbf5d","Type":"ContainerStarted","Data":"eeb4131e86d3136fe15c623a76d16d3e77de7ad3534c599dd744b594044ae3ca"} Jan 31 04:03:45 crc kubenswrapper[4667]: I0131 04:03:45.396937 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-6cf9z" podStartSLOduration=6.34001044 podStartE2EDuration="37.396920807s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:10.880206057 +0000 UTC m=+914.396541356" lastFinishedPulling="2026-01-31 04:03:41.937116424 +0000 UTC m=+945.453451723" observedRunningTime="2026-01-31 04:03:45.391077073 +0000 UTC m=+948.907412362" 
watchObservedRunningTime="2026-01-31 04:03:45.396920807 +0000 UTC m=+948.913256096" Jan 31 04:03:45 crc kubenswrapper[4667]: I0131 04:03:45.396988 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" event={"ID":"ad7389f5-4d9e-4a91-89b8-8f65e425fe83","Type":"ContainerStarted","Data":"df9ae78392a4aa9510207c5c4ff6cf0c802c2cd3236dccfcafd794ea1c2fc479"} Jan 31 04:03:45 crc kubenswrapper[4667]: I0131 04:03:45.419448 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2xtdf" event={"ID":"5743730b-079b-4b07-a87b-932cd637e387","Type":"ContainerStarted","Data":"3fc1bd5120f5a90dc0a2b31ba2bc578a6f153043d314023cab3250f0444017ca"} Jan 31 04:03:45 crc kubenswrapper[4667]: I0131 04:03:45.419975 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2xtdf" Jan 31 04:03:45 crc kubenswrapper[4667]: I0131 04:03:45.426166 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-54fc54694b-t88kx" podStartSLOduration=4.898149197 podStartE2EDuration="37.42614454s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:10.183637065 +0000 UTC m=+913.699972354" lastFinishedPulling="2026-01-31 04:03:42.711632398 +0000 UTC m=+946.227967697" observedRunningTime="2026-01-31 04:03:45.415569241 +0000 UTC m=+948.931904530" watchObservedRunningTime="2026-01-31 04:03:45.42614454 +0000 UTC m=+948.942479839" Jan 31 04:03:45 crc kubenswrapper[4667]: I0131 04:03:45.458216 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-lgk8x" podStartSLOduration=6.352909871 podStartE2EDuration="37.458197478s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:10.831801857 +0000 UTC m=+914.348137156" lastFinishedPulling="2026-01-31 04:03:41.937089464 +0000 UTC m=+945.453424763" observedRunningTime="2026-01-31 04:03:45.456728269 +0000 UTC m=+948.973063588" watchObservedRunningTime="2026-01-31 04:03:45.458197478 +0000 UTC m=+948.974532777" Jan 31 04:03:45 crc kubenswrapper[4667]: I0131 04:03:45.586689 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2xtdf" podStartSLOduration=5.543844872 podStartE2EDuration="37.586672395s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:09.894263971 +0000 UTC m=+913.410599270" lastFinishedPulling="2026-01-31 04:03:41.937091494 +0000 UTC m=+945.453426793" observedRunningTime="2026-01-31 04:03:45.585153145 +0000 UTC m=+949.101488444" watchObservedRunningTime="2026-01-31 04:03:45.586672395 +0000 UTC m=+949.103007694" Jan 31 04:03:45 crc kubenswrapper[4667]: I0131 04:03:45.589890 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-kf59p" podStartSLOduration=6.526214974 podStartE2EDuration="37.5898853s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:10.873084409 +0000 UTC m=+914.389419708" lastFinishedPulling="2026-01-31 04:03:41.936754735 +0000 UTC m=+945.453090034" observedRunningTime="2026-01-31 04:03:45.511193829 +0000 UTC m=+949.027529128" 
watchObservedRunningTime="2026-01-31 04:03:45.5898853 +0000 UTC m=+949.106220599" Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.441288 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zq7nc" event={"ID":"4dd3097d-038b-459b-be09-25e6a9c28379","Type":"ContainerStarted","Data":"6144c5ab9e0015cde757de0cb02b4851ec89c41f9ce293eb6377ee3dbb96c283"} Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.442556 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zq7nc" Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.450893 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-rfpnc" event={"ID":"f26454ff-c920-4240-84dd-684272f0c0c8","Type":"ContainerStarted","Data":"9fed6cac5033dd00f9bae5aaa628981c2638f9b9e4d034c13fa9775385b39f28"} Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.451158 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-rfpnc" Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.455434 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-pqxkg" event={"ID":"508d212d-ccda-471c-94aa-96955a519e5a","Type":"ContainerStarted","Data":"1c7e044f0bc883fec7db0bf1a230b7f835c4548e0d6421818b3aa2f0a8cc5be9"} Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.455658 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-pqxkg" Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.456989 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wmmkk" event={"ID":"f955fd59-24f1-42bb-81a8-c17e32274291","Type":"ContainerStarted","Data":"20f29b009dae77b0e13c1f2d6bb3e579fafea08a5b79ef07cd6cf8702a1b1d51"} Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.457365 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wmmkk" Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.475208 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-hxstt" event={"ID":"8a4eab04-25a1-4da9-8ee1-0243d4b69073","Type":"ContainerStarted","Data":"6080e8cfe047b5e2ba2567ef374a7f0a1a72e098c47190390c634c193df817f4"} Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.475960 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-hxstt" Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.475954 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zq7nc" podStartSLOduration=4.71831894 podStartE2EDuration="38.475825831s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:10.883800852 +0000 UTC m=+914.400136151" lastFinishedPulling="2026-01-31 04:03:44.641307723 +0000 UTC m=+948.157643042" observedRunningTime="2026-01-31 04:03:46.473187101 +0000 UTC m=+949.989522400" watchObservedRunningTime="2026-01-31 04:03:46.475825831 +0000 UTC m=+949.992161130" Jan 31 04:03:46 crc 
kubenswrapper[4667]: I0131 04:03:46.499380 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-fxzcm" event={"ID":"675da051-c9cc-4817-9092-478b3d90d1bf","Type":"ContainerStarted","Data":"d9894a764019f673ee236c78221db9f34ab157bf4c7843896ddf58ac7079f7b1"} Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.500100 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-fxzcm" Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.513467 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zswlt" event={"ID":"47cf710a-e856-4094-8ef8-ff115631a236","Type":"ContainerStarted","Data":"af3983d7a44d0eb371ee39f5758756bf0d96ed983540d8d7b764f3dc41a9146a"} Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.520865 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-pqxkg" podStartSLOduration=6.136940759 podStartE2EDuration="38.520831431s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:10.327738176 +0000 UTC m=+913.844073475" lastFinishedPulling="2026-01-31 04:03:42.711628858 +0000 UTC m=+946.227964147" observedRunningTime="2026-01-31 04:03:46.518986582 +0000 UTC m=+950.035321881" watchObservedRunningTime="2026-01-31 04:03:46.520831431 +0000 UTC m=+950.037166730" Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.527792 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w6vd6" event={"ID":"ed4fdc84-4fc5-4e5a-8959-b5ea977c9b56","Type":"ContainerStarted","Data":"6e02cc8e64d7d29085c21314237a861a71b0df682b5a55b499f81eb5ec9152e9"} Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.528530 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w6vd6" Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.544149 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-2zj6j" event={"ID":"645ed22c-c54e-495c-af4d-a63635f01dbc","Type":"ContainerStarted","Data":"1de98e553ccc8143d4750c21adac8d1616c9e80c2cf879a073d5521747de4f0f"} Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.544887 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-2zj6j" Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.558234 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-vpt7r" event={"ID":"4955e603-5ae1-4c59-8f06-7e4c3f1cae70","Type":"ContainerStarted","Data":"8f1055b0bf77f228f446d26acbabdeb936966f5f0a38bf15c0070ab6cc479ec6"} Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.559179 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-vpt7r" Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.564976 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" 
event={"ID":"5af1cf00-3340-481a-9312-cdd15cddbf5d","Type":"ContainerStarted","Data":"ad114d5e8671dffa361be0dffa491f5d25d55ffe60c8368442c15a10653018dd"} Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.640301 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-rfpnc" podStartSLOduration=4.149220788 podStartE2EDuration="38.64027304s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:10.183046669 +0000 UTC m=+913.699381968" lastFinishedPulling="2026-01-31 04:03:44.674098921 +0000 UTC m=+948.190434220" observedRunningTime="2026-01-31 04:03:46.639201252 +0000 UTC m=+950.155536551" watchObservedRunningTime="2026-01-31 04:03:46.64027304 +0000 UTC m=+950.156608339" Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.641498 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wmmkk" podStartSLOduration=4.882879282 podStartE2EDuration="38.641492982s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:10.884119731 +0000 UTC m=+914.400455030" lastFinishedPulling="2026-01-31 04:03:44.642733441 +0000 UTC m=+948.159068730" observedRunningTime="2026-01-31 04:03:46.572668202 +0000 UTC m=+950.089003501" watchObservedRunningTime="2026-01-31 04:03:46.641492982 +0000 UTC m=+950.157828281" Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.672815 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-vpt7r" podStartSLOduration=4.870036182 podStartE2EDuration="38.67279431s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:10.878730378 +0000 UTC m=+914.395065677" lastFinishedPulling="2026-01-31 04:03:44.681488506 +0000 UTC m=+948.197823805" observedRunningTime="2026-01-31 04:03:46.671116706 +0000 UTC m=+950.187452005" watchObservedRunningTime="2026-01-31 04:03:46.67279431 +0000 UTC m=+950.189129609" Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.768599 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-fxzcm" podStartSLOduration=5.012619573 podStartE2EDuration="38.768575223s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:10.885374084 +0000 UTC m=+914.401709383" lastFinishedPulling="2026-01-31 04:03:44.641329734 +0000 UTC m=+948.157665033" observedRunningTime="2026-01-31 04:03:46.733547587 +0000 UTC m=+950.249882886" watchObservedRunningTime="2026-01-31 04:03:46.768575223 +0000 UTC m=+950.284910522" Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.833216 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w6vd6" podStartSLOduration=4.942144949 podStartE2EDuration="38.833189662s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:10.832469325 +0000 UTC m=+914.348804624" lastFinishedPulling="2026-01-31 04:03:44.723514038 +0000 UTC m=+948.239849337" observedRunningTime="2026-01-31 04:03:46.827966594 +0000 UTC m=+950.344301893" watchObservedRunningTime="2026-01-31 04:03:46.833189662 +0000 UTC m=+950.349524961" Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.950415 4667 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-2zj6j" podStartSLOduration=5.197710838 podStartE2EDuration="38.950395912s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:10.880878345 +0000 UTC m=+914.397213644" lastFinishedPulling="2026-01-31 04:03:44.633563419 +0000 UTC m=+948.149898718" observedRunningTime="2026-01-31 04:03:46.887606362 +0000 UTC m=+950.403941661" watchObservedRunningTime="2026-01-31 04:03:46.950395912 +0000 UTC m=+950.466731211" Jan 31 04:03:46 crc kubenswrapper[4667]: I0131 04:03:46.954477 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-hxstt" podStartSLOduration=5.126212438 podStartE2EDuration="38.95446661s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:10.873436078 +0000 UTC m=+914.389771377" lastFinishedPulling="2026-01-31 04:03:44.70169025 +0000 UTC m=+948.218025549" observedRunningTime="2026-01-31 04:03:46.949073747 +0000 UTC m=+950.465409046" watchObservedRunningTime="2026-01-31 04:03:46.95446661 +0000 UTC m=+950.470801909" Jan 31 04:03:47 crc kubenswrapper[4667]: I0131 04:03:47.015046 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" podStartSLOduration=38.015021811 podStartE2EDuration="38.015021811s" podCreationTimestamp="2026-01-31 04:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:03:47.014869927 +0000 UTC m=+950.531205216" watchObservedRunningTime="2026-01-31 04:03:47.015021811 +0000 UTC m=+950.531357110" Jan 31 04:03:47 crc kubenswrapper[4667]: I0131 04:03:47.575010 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:48 crc kubenswrapper[4667]: I0131 04:03:48.121770 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8kcsw"] Jan 31 04:03:48 crc kubenswrapper[4667]: I0131 04:03:48.123518 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8kcsw" Jan 31 04:03:48 crc kubenswrapper[4667]: I0131 04:03:48.152688 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kcsw"] Jan 31 04:03:48 crc kubenswrapper[4667]: I0131 04:03:48.207531 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6721fd64-d815-4fa7-8332-76eebcfad816-utilities\") pod \"certified-operators-8kcsw\" (UID: \"6721fd64-d815-4fa7-8332-76eebcfad816\") " pod="openshift-marketplace/certified-operators-8kcsw" Jan 31 04:03:48 crc kubenswrapper[4667]: I0131 04:03:48.207585 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb54v\" (UniqueName: \"kubernetes.io/projected/6721fd64-d815-4fa7-8332-76eebcfad816-kube-api-access-cb54v\") pod \"certified-operators-8kcsw\" (UID: \"6721fd64-d815-4fa7-8332-76eebcfad816\") " pod="openshift-marketplace/certified-operators-8kcsw" Jan 31 04:03:48 crc kubenswrapper[4667]: I0131 04:03:48.207611 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6721fd64-d815-4fa7-8332-76eebcfad816-catalog-content\") pod \"certified-operators-8kcsw\" (UID: \"6721fd64-d815-4fa7-8332-76eebcfad816\") " pod="openshift-marketplace/certified-operators-8kcsw" Jan 31 04:03:48 crc kubenswrapper[4667]: I0131 04:03:48.309022 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6721fd64-d815-4fa7-8332-76eebcfad816-utilities\") pod \"certified-operators-8kcsw\" (UID: \"6721fd64-d815-4fa7-8332-76eebcfad816\") " pod="openshift-marketplace/certified-operators-8kcsw" Jan 31 04:03:48 crc kubenswrapper[4667]: I0131 04:03:48.309069 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb54v\" (UniqueName: \"kubernetes.io/projected/6721fd64-d815-4fa7-8332-76eebcfad816-kube-api-access-cb54v\") pod \"certified-operators-8kcsw\" (UID: \"6721fd64-d815-4fa7-8332-76eebcfad816\") " pod="openshift-marketplace/certified-operators-8kcsw" Jan 31 04:03:48 crc kubenswrapper[4667]: I0131 04:03:48.309098 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6721fd64-d815-4fa7-8332-76eebcfad816-catalog-content\") pod \"certified-operators-8kcsw\" (UID: \"6721fd64-d815-4fa7-8332-76eebcfad816\") " pod="openshift-marketplace/certified-operators-8kcsw" Jan 31 04:03:48 crc kubenswrapper[4667]: I0131 04:03:48.309554 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6721fd64-d815-4fa7-8332-76eebcfad816-catalog-content\") pod \"certified-operators-8kcsw\" (UID: \"6721fd64-d815-4fa7-8332-76eebcfad816\") " pod="openshift-marketplace/certified-operators-8kcsw" Jan 31 04:03:48 crc kubenswrapper[4667]: I0131 04:03:48.310238 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6721fd64-d815-4fa7-8332-76eebcfad816-utilities\") pod \"certified-operators-8kcsw\" (UID: \"6721fd64-d815-4fa7-8332-76eebcfad816\") " pod="openshift-marketplace/certified-operators-8kcsw" Jan 31 04:03:48 crc kubenswrapper[4667]: I0131 04:03:48.338770 4667 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cb54v\" (UniqueName: \"kubernetes.io/projected/6721fd64-d815-4fa7-8332-76eebcfad816-kube-api-access-cb54v\") pod \"certified-operators-8kcsw\" (UID: \"6721fd64-d815-4fa7-8332-76eebcfad816\") " pod="openshift-marketplace/certified-operators-8kcsw" Jan 31 04:03:48 crc kubenswrapper[4667]: I0131 04:03:48.438514 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kcsw" Jan 31 04:03:48 crc kubenswrapper[4667]: I0131 04:03:48.585341 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-j629c" event={"ID":"5280851f-6404-45ad-adc7-f41479cb7dc3","Type":"ContainerStarted","Data":"cd19e61c05dfc76304847e77a916ce1da0c2c56127c94a9b01f124fbeb23f865"} Jan 31 04:03:48 crc kubenswrapper[4667]: I0131 04:03:48.586002 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-j629c" Jan 31 04:03:48 crc kubenswrapper[4667]: I0131 04:03:48.601129 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-j629c" podStartSLOduration=3.846209475 podStartE2EDuration="40.601116729s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:10.880514445 +0000 UTC m=+914.396849744" lastFinishedPulling="2026-01-31 04:03:47.635421699 +0000 UTC m=+951.151756998" observedRunningTime="2026-01-31 04:03:48.599548768 +0000 UTC m=+952.115884067" watchObservedRunningTime="2026-01-31 04:03:48.601116729 +0000 UTC m=+952.117452028" Jan 31 04:03:49 crc kubenswrapper[4667]: I0131 04:03:49.134168 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kcsw"] Jan 31 04:03:49 crc kubenswrapper[4667]: I0131 04:03:49.267195 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-kf59p" Jan 31 04:03:51 crc kubenswrapper[4667]: I0131 04:03:51.714907 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-fcd7f5fc5-pfnrd" Jan 31 04:03:51 crc kubenswrapper[4667]: W0131 04:03:51.883313 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6721fd64_d815_4fa7_8332_76eebcfad816.slice/crio-3076f6e263c78ce87e5375e91ffe437344f3055e2b00b8e5d410675cd4e73209 WatchSource:0}: Error finding container 3076f6e263c78ce87e5375e91ffe437344f3055e2b00b8e5d410675cd4e73209: Status 404 returned error can't find the container with id 3076f6e263c78ce87e5375e91ffe437344f3055e2b00b8e5d410675cd4e73209 Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.546136 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lqgdr"] Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.547787 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lqgdr" Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.570222 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lqgdr"] Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.587195 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f05ec05-e1e9-4a75-8971-9c7a716e6a6d-utilities\") pod \"community-operators-lqgdr\" (UID: \"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d\") " pod="openshift-marketplace/community-operators-lqgdr" Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.587447 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rfwb\" (UniqueName: \"kubernetes.io/projected/8f05ec05-e1e9-4a75-8971-9c7a716e6a6d-kube-api-access-6rfwb\") pod \"community-operators-lqgdr\" (UID: \"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d\") " pod="openshift-marketplace/community-operators-lqgdr" Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.587491 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f05ec05-e1e9-4a75-8971-9c7a716e6a6d-catalog-content\") pod \"community-operators-lqgdr\" (UID: \"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d\") " pod="openshift-marketplace/community-operators-lqgdr" Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.685076 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zswlt" event={"ID":"47cf710a-e856-4094-8ef8-ff115631a236","Type":"ContainerStarted","Data":"a43822d40657eedcc8bea937bbf0436476cd7c9704fb8da9fc75ff714fc6fa42"} Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.686397 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zswlt" Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.696683 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rfwb\" (UniqueName: \"kubernetes.io/projected/8f05ec05-e1e9-4a75-8971-9c7a716e6a6d-kube-api-access-6rfwb\") pod \"community-operators-lqgdr\" (UID: \"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d\") " pod="openshift-marketplace/community-operators-lqgdr" Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.696733 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f05ec05-e1e9-4a75-8971-9c7a716e6a6d-catalog-content\") pod \"community-operators-lqgdr\" (UID: \"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d\") " pod="openshift-marketplace/community-operators-lqgdr" Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.696786 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f05ec05-e1e9-4a75-8971-9c7a716e6a6d-utilities\") pod \"community-operators-lqgdr\" (UID: \"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d\") " pod="openshift-marketplace/community-operators-lqgdr" Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.697223 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f05ec05-e1e9-4a75-8971-9c7a716e6a6d-utilities\") pod \"community-operators-lqgdr\" (UID: 
\"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d\") " pod="openshift-marketplace/community-operators-lqgdr" Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.697828 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f05ec05-e1e9-4a75-8971-9c7a716e6a6d-catalog-content\") pod \"community-operators-lqgdr\" (UID: \"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d\") " pod="openshift-marketplace/community-operators-lqgdr" Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.704080 4667 generic.go:334] "Generic (PLEG): container finished" podID="6721fd64-d815-4fa7-8332-76eebcfad816" containerID="df9f21a1de323d01bc2373f6a2e027d300445a56c6502598037ff368fab8f4eb" exitCode=0 Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.704167 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kcsw" event={"ID":"6721fd64-d815-4fa7-8332-76eebcfad816","Type":"ContainerDied","Data":"df9f21a1de323d01bc2373f6a2e027d300445a56c6502598037ff368fab8f4eb"} Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.704203 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kcsw" event={"ID":"6721fd64-d815-4fa7-8332-76eebcfad816","Type":"ContainerStarted","Data":"3076f6e263c78ce87e5375e91ffe437344f3055e2b00b8e5d410675cd4e73209"} Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.711062 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4btm" event={"ID":"7c71998a-5e4c-461c-96f9-3ff67b4619cd","Type":"ContainerStarted","Data":"177635e1f7fd9b900908bd2c2b634e08b97a8968691bd4252f62c86814926489"} Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.711583 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4btm" Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.720442 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzb8l" event={"ID":"cfe9238d-7457-43f4-9933-cece048fc3fe","Type":"ContainerStarted","Data":"1c065f64cfa8e2b0e63dbde67905141db77d41fcbb85f8dcfcf9eca37089f426"} Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.721118 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzb8l" Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.727490 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-x46bg" event={"ID":"fc224c93-299f-4f99-b16d-64ab47cb66a8","Type":"ContainerStarted","Data":"1d48b753d4791ee69e77332a38d4885e97bc90bd3c8a9b0169ccc815c04b7cf4"} Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.728502 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-x46bg" Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.733986 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" event={"ID":"ad7389f5-4d9e-4a91-89b8-8f65e425fe83","Type":"ContainerStarted","Data":"54e98773d88bb85597faf133687db24ed427a5f44eb27cc02c64ecf16d4442f7"} Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.734563 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.784937 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rfwb\" (UniqueName: \"kubernetes.io/projected/8f05ec05-e1e9-4a75-8971-9c7a716e6a6d-kube-api-access-6rfwb\") pod \"community-operators-lqgdr\" (UID: \"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d\") " pod="openshift-marketplace/community-operators-lqgdr" Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.801964 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zswlt" podStartSLOduration=38.103083823 podStartE2EDuration="44.801933139s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:45.349213136 +0000 UTC m=+948.865548435" lastFinishedPulling="2026-01-31 04:03:52.048062452 +0000 UTC m=+955.564397751" observedRunningTime="2026-01-31 04:03:52.756302062 +0000 UTC m=+956.272637351" watchObservedRunningTime="2026-01-31 04:03:52.801933139 +0000 UTC m=+956.318268438" Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.867304 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzb8l" podStartSLOduration=3.14654331 podStartE2EDuration="44.867284047s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:10.327747856 +0000 UTC m=+913.844083155" lastFinishedPulling="2026-01-31 04:03:52.048488603 +0000 UTC m=+955.564823892" observedRunningTime="2026-01-31 04:03:52.805238786 +0000 UTC m=+956.321574085" watchObservedRunningTime="2026-01-31 04:03:52.867284047 +0000 UTC m=+956.383619346" Jan 31 04:03:52 crc kubenswrapper[4667]: I0131 04:03:52.869178 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lqgdr" Jan 31 04:03:53 crc kubenswrapper[4667]: I0131 04:03:53.007920 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-x46bg" podStartSLOduration=3.8451428869999997 podStartE2EDuration="45.007889756s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:10.880654839 +0000 UTC m=+914.396990138" lastFinishedPulling="2026-01-31 04:03:52.043401708 +0000 UTC m=+955.559737007" observedRunningTime="2026-01-31 04:03:52.870133942 +0000 UTC m=+956.386469241" watchObservedRunningTime="2026-01-31 04:03:53.007889756 +0000 UTC m=+956.524225055" Jan 31 04:03:53 crc kubenswrapper[4667]: I0131 04:03:53.294041 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4btm" podStartSLOduration=7.277295397 podStartE2EDuration="45.294019463s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:10.880393742 +0000 UTC m=+914.396729041" lastFinishedPulling="2026-01-31 04:03:48.897117808 +0000 UTC m=+952.413453107" observedRunningTime="2026-01-31 04:03:53.123958975 +0000 UTC m=+956.640294264" watchObservedRunningTime="2026-01-31 04:03:53.294019463 +0000 UTC m=+956.810354762" Jan 31 04:03:53 crc kubenswrapper[4667]: I0131 04:03:53.297303 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" podStartSLOduration=38.378475927 podStartE2EDuration="45.2972937s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:45.10851875 +0000 UTC m=+948.624854039" lastFinishedPulling="2026-01-31 04:03:52.027336513 +0000 UTC m=+955.543671812" observedRunningTime="2026-01-31 04:03:53.249418113 +0000 UTC m=+956.765753412" watchObservedRunningTime="2026-01-31 04:03:53.2972937 +0000 UTC m=+956.813628989" Jan 31 04:03:53 crc kubenswrapper[4667]: I0131 04:03:53.598173 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lqgdr"] Jan 31 04:03:53 crc kubenswrapper[4667]: I0131 04:03:53.744602 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqgdr" event={"ID":"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d","Type":"ContainerStarted","Data":"99d1dcc4b4b0d085330490b92ccc3c526a1d927baec989b00f60b3c3c97e818e"} Jan 31 04:03:54 crc kubenswrapper[4667]: E0131 04:03:54.284831 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gzj6r" podUID="340a909d-7419-4721-be11-2c37a3a87022" Jan 31 04:03:54 crc kubenswrapper[4667]: I0131 04:03:54.754213 4667 generic.go:334] "Generic (PLEG): container finished" podID="8f05ec05-e1e9-4a75-8971-9c7a716e6a6d" containerID="7dd13f374c24ae28d9ecbce4574c968d58377e0bceb49f69e4bf882089140f89" exitCode=0 Jan 31 04:03:54 crc kubenswrapper[4667]: I0131 04:03:54.754301 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqgdr" 
event={"ID":"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d","Type":"ContainerDied","Data":"7dd13f374c24ae28d9ecbce4574c968d58377e0bceb49f69e4bf882089140f89"} Jan 31 04:03:55 crc kubenswrapper[4667]: I0131 04:03:55.767437 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqgdr" event={"ID":"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d","Type":"ContainerStarted","Data":"c2894bec6f1b3207f875065c5835c73004ba74e0b401f3a90d591624c1c0e9eb"} Jan 31 04:03:56 crc kubenswrapper[4667]: I0131 04:03:56.785813 4667 generic.go:334] "Generic (PLEG): container finished" podID="8f05ec05-e1e9-4a75-8971-9c7a716e6a6d" containerID="c2894bec6f1b3207f875065c5835c73004ba74e0b401f3a90d591624c1c0e9eb" exitCode=0 Jan 31 04:03:56 crc kubenswrapper[4667]: I0131 04:03:56.785913 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqgdr" event={"ID":"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d","Type":"ContainerDied","Data":"c2894bec6f1b3207f875065c5835c73004ba74e0b401f3a90d591624c1c0e9eb"} Jan 31 04:03:57 crc kubenswrapper[4667]: I0131 04:03:57.355768 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rm9zq"] Jan 31 04:03:57 crc kubenswrapper[4667]: I0131 04:03:57.357927 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm9zq" Jan 31 04:03:57 crc kubenswrapper[4667]: I0131 04:03:57.374226 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm9zq"] Jan 31 04:03:57 crc kubenswrapper[4667]: I0131 04:03:57.412198 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrklx\" (UniqueName: \"kubernetes.io/projected/e05e4eb8-23a2-4867-af5d-ad1fe7502683-kube-api-access-zrklx\") pod \"redhat-marketplace-rm9zq\" (UID: \"e05e4eb8-23a2-4867-af5d-ad1fe7502683\") " pod="openshift-marketplace/redhat-marketplace-rm9zq" Jan 31 04:03:57 crc kubenswrapper[4667]: I0131 04:03:57.412292 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e05e4eb8-23a2-4867-af5d-ad1fe7502683-utilities\") pod \"redhat-marketplace-rm9zq\" (UID: \"e05e4eb8-23a2-4867-af5d-ad1fe7502683\") " pod="openshift-marketplace/redhat-marketplace-rm9zq" Jan 31 04:03:57 crc kubenswrapper[4667]: I0131 04:03:57.412314 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e05e4eb8-23a2-4867-af5d-ad1fe7502683-catalog-content\") pod \"redhat-marketplace-rm9zq\" (UID: \"e05e4eb8-23a2-4867-af5d-ad1fe7502683\") " pod="openshift-marketplace/redhat-marketplace-rm9zq" Jan 31 04:03:57 crc kubenswrapper[4667]: I0131 04:03:57.514459 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrklx\" (UniqueName: \"kubernetes.io/projected/e05e4eb8-23a2-4867-af5d-ad1fe7502683-kube-api-access-zrklx\") pod \"redhat-marketplace-rm9zq\" (UID: \"e05e4eb8-23a2-4867-af5d-ad1fe7502683\") " pod="openshift-marketplace/redhat-marketplace-rm9zq" Jan 31 04:03:57 crc kubenswrapper[4667]: I0131 04:03:57.514576 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e05e4eb8-23a2-4867-af5d-ad1fe7502683-utilities\") pod \"redhat-marketplace-rm9zq\" (UID: 
\"e05e4eb8-23a2-4867-af5d-ad1fe7502683\") " pod="openshift-marketplace/redhat-marketplace-rm9zq" Jan 31 04:03:57 crc kubenswrapper[4667]: I0131 04:03:57.514601 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e05e4eb8-23a2-4867-af5d-ad1fe7502683-catalog-content\") pod \"redhat-marketplace-rm9zq\" (UID: \"e05e4eb8-23a2-4867-af5d-ad1fe7502683\") " pod="openshift-marketplace/redhat-marketplace-rm9zq" Jan 31 04:03:57 crc kubenswrapper[4667]: I0131 04:03:57.515223 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e05e4eb8-23a2-4867-af5d-ad1fe7502683-catalog-content\") pod \"redhat-marketplace-rm9zq\" (UID: \"e05e4eb8-23a2-4867-af5d-ad1fe7502683\") " pod="openshift-marketplace/redhat-marketplace-rm9zq" Jan 31 04:03:57 crc kubenswrapper[4667]: I0131 04:03:57.515880 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e05e4eb8-23a2-4867-af5d-ad1fe7502683-utilities\") pod \"redhat-marketplace-rm9zq\" (UID: \"e05e4eb8-23a2-4867-af5d-ad1fe7502683\") " pod="openshift-marketplace/redhat-marketplace-rm9zq" Jan 31 04:03:57 crc kubenswrapper[4667]: I0131 04:03:57.538608 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrklx\" (UniqueName: \"kubernetes.io/projected/e05e4eb8-23a2-4867-af5d-ad1fe7502683-kube-api-access-zrklx\") pod \"redhat-marketplace-rm9zq\" (UID: \"e05e4eb8-23a2-4867-af5d-ad1fe7502683\") " pod="openshift-marketplace/redhat-marketplace-rm9zq" Jan 31 04:03:57 crc kubenswrapper[4667]: I0131 04:03:57.696951 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm9zq" Jan 31 04:03:58 crc kubenswrapper[4667]: I0131 04:03:58.562205 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-2xtdf" Jan 31 04:03:58 crc kubenswrapper[4667]: I0131 04:03:58.622003 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-54fc54694b-t88kx" Jan 31 04:03:58 crc kubenswrapper[4667]: I0131 04:03:58.671209 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzb8l" Jan 31 04:03:58 crc kubenswrapper[4667]: I0131 04:03:58.706755 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-rfpnc" Jan 31 04:03:58 crc kubenswrapper[4667]: I0131 04:03:58.784084 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-hxstt" Jan 31 04:03:58 crc kubenswrapper[4667]: I0131 04:03:58.860518 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-j629c" Jan 31 04:03:58 crc kubenswrapper[4667]: I0131 04:03:58.876444 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-w6vd6" Jan 31 04:03:58 crc kubenswrapper[4667]: I0131 04:03:58.876999 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-6cf9z" Jan 31 04:03:58 crc kubenswrapper[4667]: I0131 04:03:58.878125 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-pqxkg" Jan 31 04:03:58 crc kubenswrapper[4667]: I0131 04:03:58.971393 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wmmkk" Jan 31 04:03:59 crc kubenswrapper[4667]: I0131 04:03:59.063042 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-zq7nc" Jan 31 04:03:59 crc kubenswrapper[4667]: I0131 04:03:59.208345 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-vpt7r" Jan 31 04:03:59 crc kubenswrapper[4667]: E0131 04:03:59.291479 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-964b5" podUID="1f3ad0ee-dce4-4ed0-90f7-e2c195b6d099" Jan 31 04:03:59 crc kubenswrapper[4667]: I0131 04:03:59.307951 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-2zj6j" Jan 31 04:03:59 crc kubenswrapper[4667]: I0131 04:03:59.440131 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-lgk8x" Jan 31 04:03:59 crc kubenswrapper[4667]: I0131 04:03:59.496217 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-x46bg" Jan 31 04:03:59 crc kubenswrapper[4667]: I0131 04:03:59.529075 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-b4btm" Jan 31 04:03:59 crc kubenswrapper[4667]: I0131 04:03:59.804199 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-fxzcm" Jan 31 04:04:00 crc kubenswrapper[4667]: I0131 04:04:00.523945 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zswlt" Jan 31 04:04:00 crc kubenswrapper[4667]: I0131 04:04:00.708967 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm9zq"] Jan 31 04:04:00 crc kubenswrapper[4667]: I0131 04:04:00.838171 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm9zq" event={"ID":"e05e4eb8-23a2-4867-af5d-ad1fe7502683","Type":"ContainerStarted","Data":"741eeb114690e193d8687043f8d7534c62ca1dec00a7c311fcbd1e85cd3ae06e"} Jan 31 04:04:00 crc kubenswrapper[4667]: I0131 04:04:00.845211 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kcsw" event={"ID":"6721fd64-d815-4fa7-8332-76eebcfad816","Type":"ContainerStarted","Data":"fa074e0338adbdb3ef083f90417e106d031d0997cfe901df465959f96f21fd9f"} Jan 31 04:04:00 crc 
kubenswrapper[4667]: I0131 04:04:00.977732 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d697dw" Jan 31 04:04:01 crc kubenswrapper[4667]: I0131 04:04:01.855221 4667 generic.go:334] "Generic (PLEG): container finished" podID="6721fd64-d815-4fa7-8332-76eebcfad816" containerID="fa074e0338adbdb3ef083f90417e106d031d0997cfe901df465959f96f21fd9f" exitCode=0 Jan 31 04:04:01 crc kubenswrapper[4667]: I0131 04:04:01.855324 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kcsw" event={"ID":"6721fd64-d815-4fa7-8332-76eebcfad816","Type":"ContainerDied","Data":"fa074e0338adbdb3ef083f90417e106d031d0997cfe901df465959f96f21fd9f"} Jan 31 04:04:01 crc kubenswrapper[4667]: I0131 04:04:01.857703 4667 generic.go:334] "Generic (PLEG): container finished" podID="e05e4eb8-23a2-4867-af5d-ad1fe7502683" containerID="733e5e3d6ec63b1c35f472157ea7b95ea252d0d525dca4b2dc389c81d5f803b0" exitCode=0 Jan 31 04:04:01 crc kubenswrapper[4667]: I0131 04:04:01.857752 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm9zq" event={"ID":"e05e4eb8-23a2-4867-af5d-ad1fe7502683","Type":"ContainerDied","Data":"733e5e3d6ec63b1c35f472157ea7b95ea252d0d525dca4b2dc389c81d5f803b0"} Jan 31 04:04:01 crc kubenswrapper[4667]: I0131 04:04:01.863215 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqgdr" event={"ID":"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d","Type":"ContainerStarted","Data":"29271f04d2a8f11762c3892dd757a7d0c49f373821942c3826b909c5d537d68b"} Jan 31 04:04:01 crc kubenswrapper[4667]: I0131 04:04:01.924632 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lqgdr" podStartSLOduration=4.095191867 podStartE2EDuration="9.924606588s" podCreationTimestamp="2026-01-31 04:03:52 +0000 UTC" firstStartedPulling="2026-01-31 04:03:54.756764129 +0000 UTC m=+958.273099428" lastFinishedPulling="2026-01-31 04:04:00.58617885 +0000 UTC m=+964.102514149" observedRunningTime="2026-01-31 04:04:01.921337362 +0000 UTC m=+965.437672671" watchObservedRunningTime="2026-01-31 04:04:01.924606588 +0000 UTC m=+965.440941887" Jan 31 04:04:02 crc kubenswrapper[4667]: I0131 04:04:02.871045 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lqgdr" Jan 31 04:04:02 crc kubenswrapper[4667]: I0131 04:04:02.871444 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lqgdr" Jan 31 04:04:02 crc kubenswrapper[4667]: I0131 04:04:02.894280 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kcsw" event={"ID":"6721fd64-d815-4fa7-8332-76eebcfad816","Type":"ContainerStarted","Data":"87474a4153a405a9fc226e30c58e6ee7ec41fb4f4bf34d64057838f6a269941e"} Jan 31 04:04:02 crc kubenswrapper[4667]: I0131 04:04:02.918711 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8kcsw" podStartSLOduration=5.230190215 podStartE2EDuration="14.918682079s" podCreationTimestamp="2026-01-31 04:03:48 +0000 UTC" firstStartedPulling="2026-01-31 04:03:52.711051595 +0000 UTC m=+956.227386894" lastFinishedPulling="2026-01-31 04:04:02.399543459 +0000 UTC m=+965.915878758" observedRunningTime="2026-01-31 04:04:02.914005685 +0000 UTC 
m=+966.430340984" watchObservedRunningTime="2026-01-31 04:04:02.918682079 +0000 UTC m=+966.435017378" Jan 31 04:04:02 crc kubenswrapper[4667]: I0131 04:04:02.944206 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lqgdr" Jan 31 04:04:03 crc kubenswrapper[4667]: I0131 04:04:03.901623 4667 generic.go:334] "Generic (PLEG): container finished" podID="e05e4eb8-23a2-4867-af5d-ad1fe7502683" containerID="99c17f4e21e2850b37d34b5caf362423742aa5ceb0be449f40e815b35b2cce5e" exitCode=0 Jan 31 04:04:03 crc kubenswrapper[4667]: I0131 04:04:03.901714 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm9zq" event={"ID":"e05e4eb8-23a2-4867-af5d-ad1fe7502683","Type":"ContainerDied","Data":"99c17f4e21e2850b37d34b5caf362423742aa5ceb0be449f40e815b35b2cce5e"} Jan 31 04:04:04 crc kubenswrapper[4667]: I0131 04:04:04.913012 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm9zq" event={"ID":"e05e4eb8-23a2-4867-af5d-ad1fe7502683","Type":"ContainerStarted","Data":"65a93a832f11842b00e19ff5219ddd7313e7458eeef1dadd61eae32b8204f471"} Jan 31 04:04:04 crc kubenswrapper[4667]: I0131 04:04:04.942609 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rm9zq" podStartSLOduration=5.530019329 podStartE2EDuration="7.942583655s" podCreationTimestamp="2026-01-31 04:03:57 +0000 UTC" firstStartedPulling="2026-01-31 04:04:01.860627216 +0000 UTC m=+965.376962515" lastFinishedPulling="2026-01-31 04:04:04.273191522 +0000 UTC m=+967.789526841" observedRunningTime="2026-01-31 04:04:04.936087214 +0000 UTC m=+968.452422533" watchObservedRunningTime="2026-01-31 04:04:04.942583655 +0000 UTC m=+968.458918974" Jan 31 04:04:06 crc kubenswrapper[4667]: I0131 04:04:06.927462 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gzj6r" event={"ID":"340a909d-7419-4721-be11-2c37a3a87022","Type":"ContainerStarted","Data":"5a5580f35a4a373ca4b9397bda58b298857a524cf65ea712a2fe24489930f3df"} Jan 31 04:04:06 crc kubenswrapper[4667]: I0131 04:04:06.927925 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gzj6r" Jan 31 04:04:06 crc kubenswrapper[4667]: I0131 04:04:06.948904 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gzj6r" podStartSLOduration=3.079016515 podStartE2EDuration="58.948886007s" podCreationTimestamp="2026-01-31 04:03:08 +0000 UTC" firstStartedPulling="2026-01-31 04:03:10.888048695 +0000 UTC m=+914.404383994" lastFinishedPulling="2026-01-31 04:04:06.757918167 +0000 UTC m=+970.274253486" observedRunningTime="2026-01-31 04:04:06.942251552 +0000 UTC m=+970.458586851" watchObservedRunningTime="2026-01-31 04:04:06.948886007 +0000 UTC m=+970.465221306" Jan 31 04:04:07 crc kubenswrapper[4667]: I0131 04:04:07.697819 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rm9zq" Jan 31 04:04:07 crc kubenswrapper[4667]: I0131 04:04:07.698170 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rm9zq" Jan 31 04:04:07 crc kubenswrapper[4667]: I0131 04:04:07.748402 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-rm9zq" Jan 31 04:04:08 crc kubenswrapper[4667]: I0131 04:04:08.439514 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8kcsw" Jan 31 04:04:08 crc kubenswrapper[4667]: I0131 04:04:08.439622 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8kcsw" Jan 31 04:04:08 crc kubenswrapper[4667]: I0131 04:04:08.515779 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8kcsw" Jan 31 04:04:09 crc kubenswrapper[4667]: I0131 04:04:09.006165 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8kcsw" Jan 31 04:04:10 crc kubenswrapper[4667]: I0131 04:04:10.755809 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kcsw"] Jan 31 04:04:11 crc kubenswrapper[4667]: I0131 04:04:11.131018 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4bdlg"] Jan 31 04:04:11 crc kubenswrapper[4667]: I0131 04:04:11.131594 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4bdlg" podUID="0b24b83c-995f-4a6f-a567-0ce5c6cbd210" containerName="registry-server" containerID="cri-o://2892d343d0c49caf01bce5fc49b676e656fd01119ac4be0c7f57ab9009a2dbb0" gracePeriod=2 Jan 31 04:04:11 crc kubenswrapper[4667]: I0131 04:04:11.977279 4667 generic.go:334] "Generic (PLEG): container finished" podID="0b24b83c-995f-4a6f-a567-0ce5c6cbd210" containerID="2892d343d0c49caf01bce5fc49b676e656fd01119ac4be0c7f57ab9009a2dbb0" exitCode=0 Jan 31 04:04:11 crc kubenswrapper[4667]: I0131 04:04:11.977382 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bdlg" event={"ID":"0b24b83c-995f-4a6f-a567-0ce5c6cbd210","Type":"ContainerDied","Data":"2892d343d0c49caf01bce5fc49b676e656fd01119ac4be0c7f57ab9009a2dbb0"} Jan 31 04:04:12 crc kubenswrapper[4667]: I0131 04:04:12.237421 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4bdlg" Jan 31 04:04:12 crc kubenswrapper[4667]: I0131 04:04:12.374123 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b24b83c-995f-4a6f-a567-0ce5c6cbd210-catalog-content\") pod \"0b24b83c-995f-4a6f-a567-0ce5c6cbd210\" (UID: \"0b24b83c-995f-4a6f-a567-0ce5c6cbd210\") " Jan 31 04:04:12 crc kubenswrapper[4667]: I0131 04:04:12.374486 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b24b83c-995f-4a6f-a567-0ce5c6cbd210-utilities\") pod \"0b24b83c-995f-4a6f-a567-0ce5c6cbd210\" (UID: \"0b24b83c-995f-4a6f-a567-0ce5c6cbd210\") " Jan 31 04:04:12 crc kubenswrapper[4667]: I0131 04:04:12.374589 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd7np\" (UniqueName: \"kubernetes.io/projected/0b24b83c-995f-4a6f-a567-0ce5c6cbd210-kube-api-access-gd7np\") pod \"0b24b83c-995f-4a6f-a567-0ce5c6cbd210\" (UID: \"0b24b83c-995f-4a6f-a567-0ce5c6cbd210\") " Jan 31 04:04:12 crc kubenswrapper[4667]: I0131 04:04:12.375725 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b24b83c-995f-4a6f-a567-0ce5c6cbd210-utilities" (OuterVolumeSpecName: "utilities") pod "0b24b83c-995f-4a6f-a567-0ce5c6cbd210" (UID: "0b24b83c-995f-4a6f-a567-0ce5c6cbd210"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:04:12 crc kubenswrapper[4667]: I0131 04:04:12.412064 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b24b83c-995f-4a6f-a567-0ce5c6cbd210-kube-api-access-gd7np" (OuterVolumeSpecName: "kube-api-access-gd7np") pod "0b24b83c-995f-4a6f-a567-0ce5c6cbd210" (UID: "0b24b83c-995f-4a6f-a567-0ce5c6cbd210"). InnerVolumeSpecName "kube-api-access-gd7np". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:04:12 crc kubenswrapper[4667]: I0131 04:04:12.433719 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b24b83c-995f-4a6f-a567-0ce5c6cbd210-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b24b83c-995f-4a6f-a567-0ce5c6cbd210" (UID: "0b24b83c-995f-4a6f-a567-0ce5c6cbd210"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:04:12 crc kubenswrapper[4667]: I0131 04:04:12.483409 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd7np\" (UniqueName: \"kubernetes.io/projected/0b24b83c-995f-4a6f-a567-0ce5c6cbd210-kube-api-access-gd7np\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:12 crc kubenswrapper[4667]: I0131 04:04:12.483455 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b24b83c-995f-4a6f-a567-0ce5c6cbd210-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:12 crc kubenswrapper[4667]: I0131 04:04:12.483470 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b24b83c-995f-4a6f-a567-0ce5c6cbd210-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:12 crc kubenswrapper[4667]: I0131 04:04:12.926192 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lqgdr" Jan 31 04:04:12 crc kubenswrapper[4667]: I0131 04:04:12.988455 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bdlg" event={"ID":"0b24b83c-995f-4a6f-a567-0ce5c6cbd210","Type":"ContainerDied","Data":"717c07ffa6e0ff7a2821b1c25df1c3ba890d590da3a463e6ce4b96663c25d9fc"} Jan 31 04:04:12 crc kubenswrapper[4667]: I0131 04:04:12.988539 4667 scope.go:117] "RemoveContainer" containerID="2892d343d0c49caf01bce5fc49b676e656fd01119ac4be0c7f57ab9009a2dbb0" Jan 31 04:04:12 crc kubenswrapper[4667]: I0131 04:04:12.988568 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bdlg" Jan 31 04:04:13 crc kubenswrapper[4667]: I0131 04:04:13.012110 4667 scope.go:117] "RemoveContainer" containerID="642a802c9c34b25b12c5473dd8f538ed5f2c99bcde3e1d82cf6575c3179ab63e" Jan 31 04:04:13 crc kubenswrapper[4667]: I0131 04:04:13.023566 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4bdlg"] Jan 31 04:04:13 crc kubenswrapper[4667]: I0131 04:04:13.033218 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4bdlg"] Jan 31 04:04:13 crc kubenswrapper[4667]: I0131 04:04:13.039262 4667 scope.go:117] "RemoveContainer" containerID="649f1de49e96cd20ae5ba92f05cceae24d632743e2eeb2817c084f0cd55484a3" Jan 31 04:04:13 crc kubenswrapper[4667]: I0131 04:04:13.283911 4667 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:04:13 crc kubenswrapper[4667]: I0131 04:04:13.291858 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b24b83c-995f-4a6f-a567-0ce5c6cbd210" path="/var/lib/kubelet/pods/0b24b83c-995f-4a6f-a567-0ce5c6cbd210/volumes" Jan 31 04:04:13 crc kubenswrapper[4667]: I0131 04:04:13.998668 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-964b5" event={"ID":"1f3ad0ee-dce4-4ed0-90f7-e2c195b6d099","Type":"ContainerStarted","Data":"b9efbeef345a67bf2834d2bdb8ebee885480c17a733855b09c006ba3e175c354"} Jan 31 04:04:14 crc kubenswrapper[4667]: I0131 04:04:14.019579 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-964b5" podStartSLOduration=2.131782756 podStartE2EDuration="1m5.019555716s" podCreationTimestamp="2026-01-31 04:03:09 +0000 
UTC" firstStartedPulling="2026-01-31 04:03:10.892194154 +0000 UTC m=+914.408529453" lastFinishedPulling="2026-01-31 04:04:13.779967094 +0000 UTC m=+977.296302413" observedRunningTime="2026-01-31 04:04:14.013263989 +0000 UTC m=+977.529599288" watchObservedRunningTime="2026-01-31 04:04:14.019555716 +0000 UTC m=+977.535891015" Jan 31 04:04:15 crc kubenswrapper[4667]: I0131 04:04:15.320122 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lqgdr"] Jan 31 04:04:15 crc kubenswrapper[4667]: I0131 04:04:15.320419 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lqgdr" podUID="8f05ec05-e1e9-4a75-8971-9c7a716e6a6d" containerName="registry-server" containerID="cri-o://29271f04d2a8f11762c3892dd757a7d0c49f373821942c3826b909c5d537d68b" gracePeriod=2 Jan 31 04:04:15 crc kubenswrapper[4667]: I0131 04:04:15.774811 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lqgdr" Jan 31 04:04:15 crc kubenswrapper[4667]: I0131 04:04:15.861770 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rfwb\" (UniqueName: \"kubernetes.io/projected/8f05ec05-e1e9-4a75-8971-9c7a716e6a6d-kube-api-access-6rfwb\") pod \"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d\" (UID: \"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d\") " Jan 31 04:04:15 crc kubenswrapper[4667]: I0131 04:04:15.861870 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f05ec05-e1e9-4a75-8971-9c7a716e6a6d-catalog-content\") pod \"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d\" (UID: \"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d\") " Jan 31 04:04:15 crc kubenswrapper[4667]: I0131 04:04:15.861923 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f05ec05-e1e9-4a75-8971-9c7a716e6a6d-utilities\") pod \"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d\" (UID: \"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d\") " Jan 31 04:04:15 crc kubenswrapper[4667]: I0131 04:04:15.862606 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f05ec05-e1e9-4a75-8971-9c7a716e6a6d-utilities" (OuterVolumeSpecName: "utilities") pod "8f05ec05-e1e9-4a75-8971-9c7a716e6a6d" (UID: "8f05ec05-e1e9-4a75-8971-9c7a716e6a6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:04:15 crc kubenswrapper[4667]: I0131 04:04:15.868502 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f05ec05-e1e9-4a75-8971-9c7a716e6a6d-kube-api-access-6rfwb" (OuterVolumeSpecName: "kube-api-access-6rfwb") pod "8f05ec05-e1e9-4a75-8971-9c7a716e6a6d" (UID: "8f05ec05-e1e9-4a75-8971-9c7a716e6a6d"). InnerVolumeSpecName "kube-api-access-6rfwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:04:15 crc kubenswrapper[4667]: I0131 04:04:15.920995 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f05ec05-e1e9-4a75-8971-9c7a716e6a6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f05ec05-e1e9-4a75-8971-9c7a716e6a6d" (UID: "8f05ec05-e1e9-4a75-8971-9c7a716e6a6d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:04:15 crc kubenswrapper[4667]: I0131 04:04:15.963047 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rfwb\" (UniqueName: \"kubernetes.io/projected/8f05ec05-e1e9-4a75-8971-9c7a716e6a6d-kube-api-access-6rfwb\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:15 crc kubenswrapper[4667]: I0131 04:04:15.963091 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f05ec05-e1e9-4a75-8971-9c7a716e6a6d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:15 crc kubenswrapper[4667]: I0131 04:04:15.963101 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f05ec05-e1e9-4a75-8971-9c7a716e6a6d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:16 crc kubenswrapper[4667]: I0131 04:04:16.012793 4667 generic.go:334] "Generic (PLEG): container finished" podID="8f05ec05-e1e9-4a75-8971-9c7a716e6a6d" containerID="29271f04d2a8f11762c3892dd757a7d0c49f373821942c3826b909c5d537d68b" exitCode=0 Jan 31 04:04:16 crc kubenswrapper[4667]: I0131 04:04:16.012868 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqgdr" event={"ID":"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d","Type":"ContainerDied","Data":"29271f04d2a8f11762c3892dd757a7d0c49f373821942c3826b909c5d537d68b"} Jan 31 04:04:16 crc kubenswrapper[4667]: I0131 04:04:16.012954 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqgdr" event={"ID":"8f05ec05-e1e9-4a75-8971-9c7a716e6a6d","Type":"ContainerDied","Data":"99d1dcc4b4b0d085330490b92ccc3c526a1d927baec989b00f60b3c3c97e818e"} Jan 31 04:04:16 crc kubenswrapper[4667]: I0131 04:04:16.012980 4667 scope.go:117] "RemoveContainer" containerID="29271f04d2a8f11762c3892dd757a7d0c49f373821942c3826b909c5d537d68b" Jan 31 04:04:16 crc kubenswrapper[4667]: I0131 04:04:16.012895 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lqgdr" Jan 31 04:04:16 crc kubenswrapper[4667]: I0131 04:04:16.032479 4667 scope.go:117] "RemoveContainer" containerID="c2894bec6f1b3207f875065c5835c73004ba74e0b401f3a90d591624c1c0e9eb" Jan 31 04:04:16 crc kubenswrapper[4667]: I0131 04:04:16.055067 4667 scope.go:117] "RemoveContainer" containerID="7dd13f374c24ae28d9ecbce4574c968d58377e0bceb49f69e4bf882089140f89" Jan 31 04:04:16 crc kubenswrapper[4667]: I0131 04:04:16.066958 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lqgdr"] Jan 31 04:04:16 crc kubenswrapper[4667]: I0131 04:04:16.074917 4667 scope.go:117] "RemoveContainer" containerID="29271f04d2a8f11762c3892dd757a7d0c49f373821942c3826b909c5d537d68b" Jan 31 04:04:16 crc kubenswrapper[4667]: E0131 04:04:16.075471 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29271f04d2a8f11762c3892dd757a7d0c49f373821942c3826b909c5d537d68b\": container with ID starting with 29271f04d2a8f11762c3892dd757a7d0c49f373821942c3826b909c5d537d68b not found: ID does not exist" containerID="29271f04d2a8f11762c3892dd757a7d0c49f373821942c3826b909c5d537d68b" Jan 31 04:04:16 crc kubenswrapper[4667]: I0131 04:04:16.075522 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29271f04d2a8f11762c3892dd757a7d0c49f373821942c3826b909c5d537d68b"} err="failed to get container status \"29271f04d2a8f11762c3892dd757a7d0c49f373821942c3826b909c5d537d68b\": rpc error: code = NotFound desc = could not find container \"29271f04d2a8f11762c3892dd757a7d0c49f373821942c3826b909c5d537d68b\": container with ID starting with 29271f04d2a8f11762c3892dd757a7d0c49f373821942c3826b909c5d537d68b not found: ID does not exist" Jan 31 04:04:16 crc kubenswrapper[4667]: I0131 04:04:16.075555 4667 scope.go:117] "RemoveContainer" containerID="c2894bec6f1b3207f875065c5835c73004ba74e0b401f3a90d591624c1c0e9eb" Jan 31 04:04:16 crc kubenswrapper[4667]: E0131 04:04:16.076034 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2894bec6f1b3207f875065c5835c73004ba74e0b401f3a90d591624c1c0e9eb\": container with ID starting with c2894bec6f1b3207f875065c5835c73004ba74e0b401f3a90d591624c1c0e9eb not found: ID does not exist" containerID="c2894bec6f1b3207f875065c5835c73004ba74e0b401f3a90d591624c1c0e9eb" Jan 31 04:04:16 crc kubenswrapper[4667]: I0131 04:04:16.076075 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2894bec6f1b3207f875065c5835c73004ba74e0b401f3a90d591624c1c0e9eb"} err="failed to get container status \"c2894bec6f1b3207f875065c5835c73004ba74e0b401f3a90d591624c1c0e9eb\": rpc error: code = NotFound desc = could not find container \"c2894bec6f1b3207f875065c5835c73004ba74e0b401f3a90d591624c1c0e9eb\": container with ID starting with c2894bec6f1b3207f875065c5835c73004ba74e0b401f3a90d591624c1c0e9eb not found: ID does not exist" Jan 31 04:04:16 crc kubenswrapper[4667]: I0131 04:04:16.076103 4667 scope.go:117] "RemoveContainer" containerID="7dd13f374c24ae28d9ecbce4574c968d58377e0bceb49f69e4bf882089140f89" Jan 31 04:04:16 crc kubenswrapper[4667]: E0131 04:04:16.076407 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dd13f374c24ae28d9ecbce4574c968d58377e0bceb49f69e4bf882089140f89\": container with ID starting with 
7dd13f374c24ae28d9ecbce4574c968d58377e0bceb49f69e4bf882089140f89 not found: ID does not exist" containerID="7dd13f374c24ae28d9ecbce4574c968d58377e0bceb49f69e4bf882089140f89" Jan 31 04:04:16 crc kubenswrapper[4667]: I0131 04:04:16.076446 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dd13f374c24ae28d9ecbce4574c968d58377e0bceb49f69e4bf882089140f89"} err="failed to get container status \"7dd13f374c24ae28d9ecbce4574c968d58377e0bceb49f69e4bf882089140f89\": rpc error: code = NotFound desc = could not find container \"7dd13f374c24ae28d9ecbce4574c968d58377e0bceb49f69e4bf882089140f89\": container with ID starting with 7dd13f374c24ae28d9ecbce4574c968d58377e0bceb49f69e4bf882089140f89 not found: ID does not exist" Jan 31 04:04:16 crc kubenswrapper[4667]: I0131 04:04:16.084902 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lqgdr"] Jan 31 04:04:17 crc kubenswrapper[4667]: I0131 04:04:17.296790 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f05ec05-e1e9-4a75-8971-9c7a716e6a6d" path="/var/lib/kubelet/pods/8f05ec05-e1e9-4a75-8971-9c7a716e6a6d/volumes" Jan 31 04:04:17 crc kubenswrapper[4667]: I0131 04:04:17.736093 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rm9zq" Jan 31 04:04:19 crc kubenswrapper[4667]: I0131 04:04:19.659407 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-gzj6r" Jan 31 04:04:19 crc kubenswrapper[4667]: I0131 04:04:19.725561 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm9zq"] Jan 31 04:04:19 crc kubenswrapper[4667]: I0131 04:04:19.726394 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rm9zq" podUID="e05e4eb8-23a2-4867-af5d-ad1fe7502683" containerName="registry-server" containerID="cri-o://65a93a832f11842b00e19ff5219ddd7313e7458eeef1dadd61eae32b8204f471" gracePeriod=2 Jan 31 04:04:20 crc kubenswrapper[4667]: I0131 04:04:20.797027 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm9zq" Jan 31 04:04:20 crc kubenswrapper[4667]: I0131 04:04:20.939683 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e05e4eb8-23a2-4867-af5d-ad1fe7502683-utilities\") pod \"e05e4eb8-23a2-4867-af5d-ad1fe7502683\" (UID: \"e05e4eb8-23a2-4867-af5d-ad1fe7502683\") " Jan 31 04:04:20 crc kubenswrapper[4667]: I0131 04:04:20.939765 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrklx\" (UniqueName: \"kubernetes.io/projected/e05e4eb8-23a2-4867-af5d-ad1fe7502683-kube-api-access-zrklx\") pod \"e05e4eb8-23a2-4867-af5d-ad1fe7502683\" (UID: \"e05e4eb8-23a2-4867-af5d-ad1fe7502683\") " Jan 31 04:04:20 crc kubenswrapper[4667]: I0131 04:04:20.939950 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e05e4eb8-23a2-4867-af5d-ad1fe7502683-catalog-content\") pod \"e05e4eb8-23a2-4867-af5d-ad1fe7502683\" (UID: \"e05e4eb8-23a2-4867-af5d-ad1fe7502683\") " Jan 31 04:04:20 crc kubenswrapper[4667]: I0131 04:04:20.940678 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e05e4eb8-23a2-4867-af5d-ad1fe7502683-utilities" (OuterVolumeSpecName: "utilities") pod "e05e4eb8-23a2-4867-af5d-ad1fe7502683" (UID: "e05e4eb8-23a2-4867-af5d-ad1fe7502683"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:04:20 crc kubenswrapper[4667]: I0131 04:04:20.948978 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e05e4eb8-23a2-4867-af5d-ad1fe7502683-kube-api-access-zrklx" (OuterVolumeSpecName: "kube-api-access-zrklx") pod "e05e4eb8-23a2-4867-af5d-ad1fe7502683" (UID: "e05e4eb8-23a2-4867-af5d-ad1fe7502683"). InnerVolumeSpecName "kube-api-access-zrklx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:04:20 crc kubenswrapper[4667]: I0131 04:04:20.962953 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e05e4eb8-23a2-4867-af5d-ad1fe7502683-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e05e4eb8-23a2-4867-af5d-ad1fe7502683" (UID: "e05e4eb8-23a2-4867-af5d-ad1fe7502683"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:04:21 crc kubenswrapper[4667]: I0131 04:04:21.041583 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e05e4eb8-23a2-4867-af5d-ad1fe7502683-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:21 crc kubenswrapper[4667]: I0131 04:04:21.041625 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e05e4eb8-23a2-4867-af5d-ad1fe7502683-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:21 crc kubenswrapper[4667]: I0131 04:04:21.041635 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrklx\" (UniqueName: \"kubernetes.io/projected/e05e4eb8-23a2-4867-af5d-ad1fe7502683-kube-api-access-zrklx\") on node \"crc\" DevicePath \"\"" Jan 31 04:04:21 crc kubenswrapper[4667]: I0131 04:04:21.062657 4667 generic.go:334] "Generic (PLEG): container finished" podID="e05e4eb8-23a2-4867-af5d-ad1fe7502683" containerID="65a93a832f11842b00e19ff5219ddd7313e7458eeef1dadd61eae32b8204f471" exitCode=0 Jan 31 04:04:21 crc kubenswrapper[4667]: I0131 04:04:21.062700 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm9zq" event={"ID":"e05e4eb8-23a2-4867-af5d-ad1fe7502683","Type":"ContainerDied","Data":"65a93a832f11842b00e19ff5219ddd7313e7458eeef1dadd61eae32b8204f471"} Jan 31 04:04:21 crc kubenswrapper[4667]: I0131 04:04:21.062733 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm9zq" event={"ID":"e05e4eb8-23a2-4867-af5d-ad1fe7502683","Type":"ContainerDied","Data":"741eeb114690e193d8687043f8d7534c62ca1dec00a7c311fcbd1e85cd3ae06e"} Jan 31 04:04:21 crc kubenswrapper[4667]: I0131 04:04:21.062754 4667 scope.go:117] "RemoveContainer" containerID="65a93a832f11842b00e19ff5219ddd7313e7458eeef1dadd61eae32b8204f471" Jan 31 04:04:21 crc kubenswrapper[4667]: I0131 04:04:21.062879 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm9zq" Jan 31 04:04:21 crc kubenswrapper[4667]: I0131 04:04:21.100894 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm9zq"] Jan 31 04:04:21 crc kubenswrapper[4667]: I0131 04:04:21.101637 4667 scope.go:117] "RemoveContainer" containerID="99c17f4e21e2850b37d34b5caf362423742aa5ceb0be449f40e815b35b2cce5e" Jan 31 04:04:21 crc kubenswrapper[4667]: I0131 04:04:21.107740 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm9zq"] Jan 31 04:04:21 crc kubenswrapper[4667]: I0131 04:04:21.119320 4667 scope.go:117] "RemoveContainer" containerID="733e5e3d6ec63b1c35f472157ea7b95ea252d0d525dca4b2dc389c81d5f803b0" Jan 31 04:04:21 crc kubenswrapper[4667]: I0131 04:04:21.141618 4667 scope.go:117] "RemoveContainer" containerID="65a93a832f11842b00e19ff5219ddd7313e7458eeef1dadd61eae32b8204f471" Jan 31 04:04:21 crc kubenswrapper[4667]: E0131 04:04:21.142139 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65a93a832f11842b00e19ff5219ddd7313e7458eeef1dadd61eae32b8204f471\": container with ID starting with 65a93a832f11842b00e19ff5219ddd7313e7458eeef1dadd61eae32b8204f471 not found: ID does not exist" containerID="65a93a832f11842b00e19ff5219ddd7313e7458eeef1dadd61eae32b8204f471" Jan 31 04:04:21 crc kubenswrapper[4667]: I0131 04:04:21.142174 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65a93a832f11842b00e19ff5219ddd7313e7458eeef1dadd61eae32b8204f471"} err="failed to get container status \"65a93a832f11842b00e19ff5219ddd7313e7458eeef1dadd61eae32b8204f471\": rpc error: code = NotFound desc = could not find container \"65a93a832f11842b00e19ff5219ddd7313e7458eeef1dadd61eae32b8204f471\": container with ID starting with 65a93a832f11842b00e19ff5219ddd7313e7458eeef1dadd61eae32b8204f471 not found: ID does not exist" Jan 31 04:04:21 crc kubenswrapper[4667]: I0131 04:04:21.142196 4667 scope.go:117] "RemoveContainer" containerID="99c17f4e21e2850b37d34b5caf362423742aa5ceb0be449f40e815b35b2cce5e" Jan 31 04:04:21 crc kubenswrapper[4667]: E0131 04:04:21.142440 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99c17f4e21e2850b37d34b5caf362423742aa5ceb0be449f40e815b35b2cce5e\": container with ID starting with 99c17f4e21e2850b37d34b5caf362423742aa5ceb0be449f40e815b35b2cce5e not found: ID does not exist" containerID="99c17f4e21e2850b37d34b5caf362423742aa5ceb0be449f40e815b35b2cce5e" Jan 31 04:04:21 crc kubenswrapper[4667]: I0131 04:04:21.142463 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99c17f4e21e2850b37d34b5caf362423742aa5ceb0be449f40e815b35b2cce5e"} err="failed to get container status \"99c17f4e21e2850b37d34b5caf362423742aa5ceb0be449f40e815b35b2cce5e\": rpc error: code = NotFound desc = could not find container \"99c17f4e21e2850b37d34b5caf362423742aa5ceb0be449f40e815b35b2cce5e\": container with ID starting with 99c17f4e21e2850b37d34b5caf362423742aa5ceb0be449f40e815b35b2cce5e not found: ID does not exist" Jan 31 04:04:21 crc kubenswrapper[4667]: I0131 04:04:21.142477 4667 scope.go:117] "RemoveContainer" containerID="733e5e3d6ec63b1c35f472157ea7b95ea252d0d525dca4b2dc389c81d5f803b0" Jan 31 04:04:21 crc kubenswrapper[4667]: E0131 04:04:21.143061 4667 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"733e5e3d6ec63b1c35f472157ea7b95ea252d0d525dca4b2dc389c81d5f803b0\": container with ID starting with 733e5e3d6ec63b1c35f472157ea7b95ea252d0d525dca4b2dc389c81d5f803b0 not found: ID does not exist" containerID="733e5e3d6ec63b1c35f472157ea7b95ea252d0d525dca4b2dc389c81d5f803b0" Jan 31 04:04:21 crc kubenswrapper[4667]: I0131 04:04:21.143344 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"733e5e3d6ec63b1c35f472157ea7b95ea252d0d525dca4b2dc389c81d5f803b0"} err="failed to get container status \"733e5e3d6ec63b1c35f472157ea7b95ea252d0d525dca4b2dc389c81d5f803b0\": rpc error: code = NotFound desc = could not find container \"733e5e3d6ec63b1c35f472157ea7b95ea252d0d525dca4b2dc389c81d5f803b0\": container with ID starting with 733e5e3d6ec63b1c35f472157ea7b95ea252d0d525dca4b2dc389c81d5f803b0 not found: ID does not exist" Jan 31 04:04:21 crc kubenswrapper[4667]: I0131 04:04:21.296638 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e05e4eb8-23a2-4867-af5d-ad1fe7502683" path="/var/lib/kubelet/pods/e05e4eb8-23a2-4867-af5d-ad1fe7502683/volumes" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.673181 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8bs85"] Jan 31 04:04:36 crc kubenswrapper[4667]: E0131 04:04:36.673989 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b24b83c-995f-4a6f-a567-0ce5c6cbd210" containerName="extract-utilities" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.674003 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b24b83c-995f-4a6f-a567-0ce5c6cbd210" containerName="extract-utilities" Jan 31 04:04:36 crc kubenswrapper[4667]: E0131 04:04:36.674017 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f05ec05-e1e9-4a75-8971-9c7a716e6a6d" containerName="registry-server" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.674023 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f05ec05-e1e9-4a75-8971-9c7a716e6a6d" containerName="registry-server" Jan 31 04:04:36 crc kubenswrapper[4667]: E0131 04:04:36.674036 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e05e4eb8-23a2-4867-af5d-ad1fe7502683" containerName="extract-content" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.674042 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="e05e4eb8-23a2-4867-af5d-ad1fe7502683" containerName="extract-content" Jan 31 04:04:36 crc kubenswrapper[4667]: E0131 04:04:36.674052 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e05e4eb8-23a2-4867-af5d-ad1fe7502683" containerName="extract-utilities" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.674058 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="e05e4eb8-23a2-4867-af5d-ad1fe7502683" containerName="extract-utilities" Jan 31 04:04:36 crc kubenswrapper[4667]: E0131 04:04:36.674066 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f05ec05-e1e9-4a75-8971-9c7a716e6a6d" containerName="extract-utilities" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.674071 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f05ec05-e1e9-4a75-8971-9c7a716e6a6d" containerName="extract-utilities" Jan 31 04:04:36 crc kubenswrapper[4667]: E0131 04:04:36.674080 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f05ec05-e1e9-4a75-8971-9c7a716e6a6d" containerName="extract-content" 
Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.674085 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f05ec05-e1e9-4a75-8971-9c7a716e6a6d" containerName="extract-content" Jan 31 04:04:36 crc kubenswrapper[4667]: E0131 04:04:36.674093 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e05e4eb8-23a2-4867-af5d-ad1fe7502683" containerName="registry-server" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.674099 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="e05e4eb8-23a2-4867-af5d-ad1fe7502683" containerName="registry-server" Jan 31 04:04:36 crc kubenswrapper[4667]: E0131 04:04:36.674110 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b24b83c-995f-4a6f-a567-0ce5c6cbd210" containerName="extract-content" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.674116 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b24b83c-995f-4a6f-a567-0ce5c6cbd210" containerName="extract-content" Jan 31 04:04:36 crc kubenswrapper[4667]: E0131 04:04:36.674126 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b24b83c-995f-4a6f-a567-0ce5c6cbd210" containerName="registry-server" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.674131 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b24b83c-995f-4a6f-a567-0ce5c6cbd210" containerName="registry-server" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.674252 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b24b83c-995f-4a6f-a567-0ce5c6cbd210" containerName="registry-server" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.674265 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f05ec05-e1e9-4a75-8971-9c7a716e6a6d" containerName="registry-server" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.674275 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="e05e4eb8-23a2-4867-af5d-ad1fe7502683" containerName="registry-server" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.675075 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8bs85" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.682326 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.682546 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.684756 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-hvxnr" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.685058 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.689200 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfp6f\" (UniqueName: \"kubernetes.io/projected/c9fb21e9-25ef-48b0-99cb-67d39aa677d1-kube-api-access-sfp6f\") pod \"dnsmasq-dns-675f4bcbfc-8bs85\" (UID: \"c9fb21e9-25ef-48b0-99cb-67d39aa677d1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8bs85" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.689310 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9fb21e9-25ef-48b0-99cb-67d39aa677d1-config\") pod \"dnsmasq-dns-675f4bcbfc-8bs85\" (UID: \"c9fb21e9-25ef-48b0-99cb-67d39aa677d1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8bs85" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.700222 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8bs85"] Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.790651 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfp6f\" (UniqueName: \"kubernetes.io/projected/c9fb21e9-25ef-48b0-99cb-67d39aa677d1-kube-api-access-sfp6f\") pod \"dnsmasq-dns-675f4bcbfc-8bs85\" (UID: \"c9fb21e9-25ef-48b0-99cb-67d39aa677d1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8bs85" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.790747 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9fb21e9-25ef-48b0-99cb-67d39aa677d1-config\") pod \"dnsmasq-dns-675f4bcbfc-8bs85\" (UID: \"c9fb21e9-25ef-48b0-99cb-67d39aa677d1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8bs85" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.791931 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9fb21e9-25ef-48b0-99cb-67d39aa677d1-config\") pod \"dnsmasq-dns-675f4bcbfc-8bs85\" (UID: \"c9fb21e9-25ef-48b0-99cb-67d39aa677d1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8bs85" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.829134 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vkbvv"] Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.830768 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vkbvv" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.834351 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.835611 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfp6f\" (UniqueName: \"kubernetes.io/projected/c9fb21e9-25ef-48b0-99cb-67d39aa677d1-kube-api-access-sfp6f\") pod \"dnsmasq-dns-675f4bcbfc-8bs85\" (UID: \"c9fb21e9-25ef-48b0-99cb-67d39aa677d1\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8bs85" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.851192 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vkbvv"] Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.892463 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e167032a-ddb6-4f07-8a1e-9f135c8d73a3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vkbvv\" (UID: \"e167032a-ddb6-4f07-8a1e-9f135c8d73a3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vkbvv" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.892560 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hc4m\" (UniqueName: \"kubernetes.io/projected/e167032a-ddb6-4f07-8a1e-9f135c8d73a3-kube-api-access-5hc4m\") pod \"dnsmasq-dns-78dd6ddcc-vkbvv\" (UID: \"e167032a-ddb6-4f07-8a1e-9f135c8d73a3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vkbvv" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.892593 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e167032a-ddb6-4f07-8a1e-9f135c8d73a3-config\") pod \"dnsmasq-dns-78dd6ddcc-vkbvv\" (UID: \"e167032a-ddb6-4f07-8a1e-9f135c8d73a3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vkbvv" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.994084 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hc4m\" (UniqueName: \"kubernetes.io/projected/e167032a-ddb6-4f07-8a1e-9f135c8d73a3-kube-api-access-5hc4m\") pod \"dnsmasq-dns-78dd6ddcc-vkbvv\" (UID: \"e167032a-ddb6-4f07-8a1e-9f135c8d73a3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vkbvv" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.994384 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e167032a-ddb6-4f07-8a1e-9f135c8d73a3-config\") pod \"dnsmasq-dns-78dd6ddcc-vkbvv\" (UID: \"e167032a-ddb6-4f07-8a1e-9f135c8d73a3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vkbvv" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.994512 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e167032a-ddb6-4f07-8a1e-9f135c8d73a3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vkbvv\" (UID: \"e167032a-ddb6-4f07-8a1e-9f135c8d73a3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vkbvv" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.995293 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e167032a-ddb6-4f07-8a1e-9f135c8d73a3-config\") pod \"dnsmasq-dns-78dd6ddcc-vkbvv\" (UID: \"e167032a-ddb6-4f07-8a1e-9f135c8d73a3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vkbvv" Jan 31 04:04:36 crc kubenswrapper[4667]: I0131 04:04:36.995346 
4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e167032a-ddb6-4f07-8a1e-9f135c8d73a3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vkbvv\" (UID: \"e167032a-ddb6-4f07-8a1e-9f135c8d73a3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vkbvv" Jan 31 04:04:37 crc kubenswrapper[4667]: I0131 04:04:37.000617 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8bs85" Jan 31 04:04:37 crc kubenswrapper[4667]: I0131 04:04:37.020025 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hc4m\" (UniqueName: \"kubernetes.io/projected/e167032a-ddb6-4f07-8a1e-9f135c8d73a3-kube-api-access-5hc4m\") pod \"dnsmasq-dns-78dd6ddcc-vkbvv\" (UID: \"e167032a-ddb6-4f07-8a1e-9f135c8d73a3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vkbvv" Jan 31 04:04:37 crc kubenswrapper[4667]: I0131 04:04:37.160216 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vkbvv" Jan 31 04:04:37 crc kubenswrapper[4667]: I0131 04:04:37.497557 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8bs85"] Jan 31 04:04:37 crc kubenswrapper[4667]: I0131 04:04:37.685734 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vkbvv"] Jan 31 04:04:37 crc kubenswrapper[4667]: W0131 04:04:37.696804 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode167032a_ddb6_4f07_8a1e_9f135c8d73a3.slice/crio-a8c1605c00c8858eaed5009d6271740ac91f66798b2f756a6fa54d019c62ea82 WatchSource:0}: Error finding container a8c1605c00c8858eaed5009d6271740ac91f66798b2f756a6fa54d019c62ea82: Status 404 returned error can't find the container with id a8c1605c00c8858eaed5009d6271740ac91f66798b2f756a6fa54d019c62ea82 Jan 31 04:04:38 crc kubenswrapper[4667]: I0131 04:04:38.275052 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vkbvv" event={"ID":"e167032a-ddb6-4f07-8a1e-9f135c8d73a3","Type":"ContainerStarted","Data":"a8c1605c00c8858eaed5009d6271740ac91f66798b2f756a6fa54d019c62ea82"} Jan 31 04:04:38 crc kubenswrapper[4667]: I0131 04:04:38.284946 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8bs85" event={"ID":"c9fb21e9-25ef-48b0-99cb-67d39aa677d1","Type":"ContainerStarted","Data":"b578f96fd03a99700ae851b98d388e7fcb2e8607109fd627568dc7f856765558"} Jan 31 04:04:39 crc kubenswrapper[4667]: I0131 04:04:39.538109 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8bs85"] Jan 31 04:04:39 crc kubenswrapper[4667]: I0131 04:04:39.580424 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8xpd4"] Jan 31 04:04:39 crc kubenswrapper[4667]: I0131 04:04:39.581965 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8xpd4" Jan 31 04:04:39 crc kubenswrapper[4667]: I0131 04:04:39.605833 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8xpd4"] Jan 31 04:04:39 crc kubenswrapper[4667]: I0131 04:04:39.655485 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dfb8d60-e646-49ab-8886-d751855667aa-config\") pod \"dnsmasq-dns-666b6646f7-8xpd4\" (UID: \"6dfb8d60-e646-49ab-8886-d751855667aa\") " pod="openstack/dnsmasq-dns-666b6646f7-8xpd4" Jan 31 04:04:39 crc kubenswrapper[4667]: I0131 04:04:39.655568 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dfb8d60-e646-49ab-8886-d751855667aa-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8xpd4\" (UID: \"6dfb8d60-e646-49ab-8886-d751855667aa\") " pod="openstack/dnsmasq-dns-666b6646f7-8xpd4" Jan 31 04:04:39 crc kubenswrapper[4667]: I0131 04:04:39.655613 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sc7p\" (UniqueName: \"kubernetes.io/projected/6dfb8d60-e646-49ab-8886-d751855667aa-kube-api-access-2sc7p\") pod \"dnsmasq-dns-666b6646f7-8xpd4\" (UID: \"6dfb8d60-e646-49ab-8886-d751855667aa\") " pod="openstack/dnsmasq-dns-666b6646f7-8xpd4" Jan 31 04:04:39 crc kubenswrapper[4667]: I0131 04:04:39.757032 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dfb8d60-e646-49ab-8886-d751855667aa-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8xpd4\" (UID: \"6dfb8d60-e646-49ab-8886-d751855667aa\") " pod="openstack/dnsmasq-dns-666b6646f7-8xpd4" Jan 31 04:04:39 crc kubenswrapper[4667]: I0131 04:04:39.757129 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sc7p\" (UniqueName: \"kubernetes.io/projected/6dfb8d60-e646-49ab-8886-d751855667aa-kube-api-access-2sc7p\") pod \"dnsmasq-dns-666b6646f7-8xpd4\" (UID: \"6dfb8d60-e646-49ab-8886-d751855667aa\") " pod="openstack/dnsmasq-dns-666b6646f7-8xpd4" Jan 31 04:04:39 crc kubenswrapper[4667]: I0131 04:04:39.757182 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dfb8d60-e646-49ab-8886-d751855667aa-config\") pod \"dnsmasq-dns-666b6646f7-8xpd4\" (UID: \"6dfb8d60-e646-49ab-8886-d751855667aa\") " pod="openstack/dnsmasq-dns-666b6646f7-8xpd4" Jan 31 04:04:39 crc kubenswrapper[4667]: I0131 04:04:39.757968 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dfb8d60-e646-49ab-8886-d751855667aa-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8xpd4\" (UID: \"6dfb8d60-e646-49ab-8886-d751855667aa\") " pod="openstack/dnsmasq-dns-666b6646f7-8xpd4" Jan 31 04:04:39 crc kubenswrapper[4667]: I0131 04:04:39.767039 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dfb8d60-e646-49ab-8886-d751855667aa-config\") pod \"dnsmasq-dns-666b6646f7-8xpd4\" (UID: \"6dfb8d60-e646-49ab-8886-d751855667aa\") " pod="openstack/dnsmasq-dns-666b6646f7-8xpd4" Jan 31 04:04:39 crc kubenswrapper[4667]: I0131 04:04:39.786254 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sc7p\" (UniqueName: 
\"kubernetes.io/projected/6dfb8d60-e646-49ab-8886-d751855667aa-kube-api-access-2sc7p\") pod \"dnsmasq-dns-666b6646f7-8xpd4\" (UID: \"6dfb8d60-e646-49ab-8886-d751855667aa\") " pod="openstack/dnsmasq-dns-666b6646f7-8xpd4" Jan 31 04:04:39 crc kubenswrapper[4667]: I0131 04:04:39.904655 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8xpd4" Jan 31 04:04:39 crc kubenswrapper[4667]: I0131 04:04:39.983164 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vkbvv"] Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.030205 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zcgws"] Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.038576 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zcgws" Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.045371 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zcgws"] Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.068807 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d332fef-7a81-4aca-b797-2b3d526c50c9-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zcgws\" (UID: \"5d332fef-7a81-4aca-b797-2b3d526c50c9\") " pod="openstack/dnsmasq-dns-57d769cc4f-zcgws" Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.068901 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d332fef-7a81-4aca-b797-2b3d526c50c9-config\") pod \"dnsmasq-dns-57d769cc4f-zcgws\" (UID: \"5d332fef-7a81-4aca-b797-2b3d526c50c9\") " pod="openstack/dnsmasq-dns-57d769cc4f-zcgws" Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.068929 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svfh8\" (UniqueName: \"kubernetes.io/projected/5d332fef-7a81-4aca-b797-2b3d526c50c9-kube-api-access-svfh8\") pod \"dnsmasq-dns-57d769cc4f-zcgws\" (UID: \"5d332fef-7a81-4aca-b797-2b3d526c50c9\") " pod="openstack/dnsmasq-dns-57d769cc4f-zcgws" Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.173364 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d332fef-7a81-4aca-b797-2b3d526c50c9-config\") pod \"dnsmasq-dns-57d769cc4f-zcgws\" (UID: \"5d332fef-7a81-4aca-b797-2b3d526c50c9\") " pod="openstack/dnsmasq-dns-57d769cc4f-zcgws" Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.173444 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svfh8\" (UniqueName: \"kubernetes.io/projected/5d332fef-7a81-4aca-b797-2b3d526c50c9-kube-api-access-svfh8\") pod \"dnsmasq-dns-57d769cc4f-zcgws\" (UID: \"5d332fef-7a81-4aca-b797-2b3d526c50c9\") " pod="openstack/dnsmasq-dns-57d769cc4f-zcgws" Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.173512 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d332fef-7a81-4aca-b797-2b3d526c50c9-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zcgws\" (UID: \"5d332fef-7a81-4aca-b797-2b3d526c50c9\") " pod="openstack/dnsmasq-dns-57d769cc4f-zcgws" Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.175295 4667 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d332fef-7a81-4aca-b797-2b3d526c50c9-config\") pod \"dnsmasq-dns-57d769cc4f-zcgws\" (UID: \"5d332fef-7a81-4aca-b797-2b3d526c50c9\") " pod="openstack/dnsmasq-dns-57d769cc4f-zcgws" Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.176252 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d332fef-7a81-4aca-b797-2b3d526c50c9-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zcgws\" (UID: \"5d332fef-7a81-4aca-b797-2b3d526c50c9\") " pod="openstack/dnsmasq-dns-57d769cc4f-zcgws" Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.247183 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svfh8\" (UniqueName: \"kubernetes.io/projected/5d332fef-7a81-4aca-b797-2b3d526c50c9-kube-api-access-svfh8\") pod \"dnsmasq-dns-57d769cc4f-zcgws\" (UID: \"5d332fef-7a81-4aca-b797-2b3d526c50c9\") " pod="openstack/dnsmasq-dns-57d769cc4f-zcgws" Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.379765 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zcgws" Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.663378 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8xpd4"] Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.775505 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.808720 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.818192 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.818525 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.818760 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-8hd55" Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.818899 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.834190 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.835327 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.835761 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.853993 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 04:04:40 crc kubenswrapper[4667]: I0131 04:04:40.987705 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zcgws"] Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.010518 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " 
pod="openstack/rabbitmq-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.010562 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.010621 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.010638 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf3f1a21-51b1-4282-99e5-ab52084984c0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.010653 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48xpm\" (UniqueName: \"kubernetes.io/projected/bf3f1a21-51b1-4282-99e5-ab52084984c0-kube-api-access-48xpm\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.010670 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf3f1a21-51b1-4282-99e5-ab52084984c0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.010695 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.010709 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf3f1a21-51b1-4282-99e5-ab52084984c0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.010728 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.010748 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf3f1a21-51b1-4282-99e5-ab52084984c0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.010781 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf3f1a21-51b1-4282-99e5-ab52084984c0-config-data\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.113109 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.113197 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.113273 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.113294 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf3f1a21-51b1-4282-99e5-ab52084984c0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.113320 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48xpm\" (UniqueName: \"kubernetes.io/projected/bf3f1a21-51b1-4282-99e5-ab52084984c0-kube-api-access-48xpm\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.113341 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf3f1a21-51b1-4282-99e5-ab52084984c0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.113378 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.113420 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf3f1a21-51b1-4282-99e5-ab52084984c0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.113447 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.113469 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf3f1a21-51b1-4282-99e5-ab52084984c0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.113508 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf3f1a21-51b1-4282-99e5-ab52084984c0-config-data\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.113667 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.113666 4667 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.114093 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.114821 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf3f1a21-51b1-4282-99e5-ab52084984c0-config-data\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.116092 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf3f1a21-51b1-4282-99e5-ab52084984c0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.116452 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf3f1a21-51b1-4282-99e5-ab52084984c0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.121358 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.134074 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf3f1a21-51b1-4282-99e5-ab52084984c0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.135215 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf3f1a21-51b1-4282-99e5-ab52084984c0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.149600 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.158589 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48xpm\" (UniqueName: \"kubernetes.io/projected/bf3f1a21-51b1-4282-99e5-ab52084984c0-kube-api-access-48xpm\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.173257 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " pod="openstack/rabbitmq-server-0"
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.180720 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.182783 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.193758 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.194233 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-p77j2" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.194384 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.194516 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.194583 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.194536 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.199037 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.318293 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.319035 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9265013e-d7ee-49cf-a5d8-c2f80066f459-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.319086 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9265013e-d7ee-49cf-a5d8-c2f80066f459-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.319117 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8wcq\" (UniqueName: \"kubernetes.io/projected/9265013e-d7ee-49cf-a5d8-c2f80066f459-kube-api-access-q8wcq\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.319166 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.319460 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.319541 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9265013e-d7ee-49cf-a5d8-c2f80066f459-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.319596 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.319654 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9265013e-d7ee-49cf-a5d8-c2f80066f459-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.319686 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9265013e-d7ee-49cf-a5d8-c2f80066f459-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.319741 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.319773 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.347799 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8xpd4" event={"ID":"6dfb8d60-e646-49ab-8886-d751855667aa","Type":"ContainerStarted","Data":"8d0d90f361955f2dae2138f7b408c9aa03fc4ac541ddb421ffc5ad6ab9f3736d"} Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.350152 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-zcgws" event={"ID":"5d332fef-7a81-4aca-b797-2b3d526c50c9","Type":"ContainerStarted","Data":"1d40557eb04f7c5018e3301e52b40885bc68ac6f710ac9ca292b5aa0e5b0c25c"} Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.421444 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.421537 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.421565 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9265013e-d7ee-49cf-a5d8-c2f80066f459-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.421590 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.421615 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9265013e-d7ee-49cf-a5d8-c2f80066f459-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.421635 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9265013e-d7ee-49cf-a5d8-c2f80066f459-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.421659 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.421680 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.421711 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9265013e-d7ee-49cf-a5d8-c2f80066f459-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.421737 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9265013e-d7ee-49cf-a5d8-c2f80066f459-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.421762 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8wcq\" (UniqueName: \"kubernetes.io/projected/9265013e-d7ee-49cf-a5d8-c2f80066f459-kube-api-access-q8wcq\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: 
I0131 04:04:41.422345 4667 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.423367 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.423643 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.423784 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9265013e-d7ee-49cf-a5d8-c2f80066f459-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.424362 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9265013e-d7ee-49cf-a5d8-c2f80066f459-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.424902 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9265013e-d7ee-49cf-a5d8-c2f80066f459-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.430817 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9265013e-d7ee-49cf-a5d8-c2f80066f459-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.432428 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.449122 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9265013e-d7ee-49cf-a5d8-c2f80066f459-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.457270 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.462343 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8wcq\" (UniqueName: \"kubernetes.io/projected/9265013e-d7ee-49cf-a5d8-c2f80066f459-kube-api-access-q8wcq\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.467522 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.478515 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 04:04:41 crc kubenswrapper[4667]: I0131 04:04:41.535261 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.292558 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.296652 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.302418 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.303630 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-hdrvf" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.303900 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.304288 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.314566 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.322636 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.338934 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.458707 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc6e0899-ca0f-4aac-8510-cf35066a3290-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.458904 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt87r\" (UniqueName: \"kubernetes.io/projected/fc6e0899-ca0f-4aac-8510-cf35066a3290-kube-api-access-vt87r\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 
04:04:42.458951 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc6e0899-ca0f-4aac-8510-cf35066a3290-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.458971 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.458990 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fc6e0899-ca0f-4aac-8510-cf35066a3290-config-data-default\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.459029 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc6e0899-ca0f-4aac-8510-cf35066a3290-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.459049 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fc6e0899-ca0f-4aac-8510-cf35066a3290-kolla-config\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.459069 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fc6e0899-ca0f-4aac-8510-cf35066a3290-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.569579 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt87r\" (UniqueName: \"kubernetes.io/projected/fc6e0899-ca0f-4aac-8510-cf35066a3290-kube-api-access-vt87r\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.569675 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc6e0899-ca0f-4aac-8510-cf35066a3290-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.569719 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.569760 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fc6e0899-ca0f-4aac-8510-cf35066a3290-config-data-default\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.569805 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc6e0899-ca0f-4aac-8510-cf35066a3290-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.569946 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fc6e0899-ca0f-4aac-8510-cf35066a3290-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.569972 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fc6e0899-ca0f-4aac-8510-cf35066a3290-kolla-config\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.570153 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc6e0899-ca0f-4aac-8510-cf35066a3290-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.578196 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.579232 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc6e0899-ca0f-4aac-8510-cf35066a3290-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.579665 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fc6e0899-ca0f-4aac-8510-cf35066a3290-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.580276 4667 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.581848 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc6e0899-ca0f-4aac-8510-cf35066a3290-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.585695 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/fc6e0899-ca0f-4aac-8510-cf35066a3290-config-data-default\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.586759 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fc6e0899-ca0f-4aac-8510-cf35066a3290-kolla-config\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.593876 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc6e0899-ca0f-4aac-8510-cf35066a3290-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: W0131 04:04:42.657259 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf3f1a21_51b1_4282_99e5_ab52084984c0.slice/crio-25e1ef4e3e1310cb5be651e126eabf22f6d35bc656a3d48369cbe95d6a81209a WatchSource:0}: Error finding container 25e1ef4e3e1310cb5be651e126eabf22f6d35bc656a3d48369cbe95d6a81209a: Status 404 returned error can't find the container with id 25e1ef4e3e1310cb5be651e126eabf22f6d35bc656a3d48369cbe95d6a81209a Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.663048 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt87r\" (UniqueName: \"kubernetes.io/projected/fc6e0899-ca0f-4aac-8510-cf35066a3290-kube-api-access-vt87r\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.669566 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"fc6e0899-ca0f-4aac-8510-cf35066a3290\") " pod="openstack/openstack-galera-0" Jan 31 04:04:42 crc kubenswrapper[4667]: I0131 04:04:42.985658 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.414211 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9265013e-d7ee-49cf-a5d8-c2f80066f459","Type":"ContainerStarted","Data":"3e792268fbb8001fb96f9c2e1920528f28aa7968bc06baab5559142c3b8b94d9"} Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.421574 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf3f1a21-51b1-4282-99e5-ab52084984c0","Type":"ContainerStarted","Data":"25e1ef4e3e1310cb5be651e126eabf22f6d35bc656a3d48369cbe95d6a81209a"} Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.509662 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.510874 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.540659 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.540879 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-5vmnn" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.542641 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.542826 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.551337 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.595592 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.696357 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.696402 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.696433 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.696486 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx48j\" (UniqueName: \"kubernetes.io/projected/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-kube-api-access-sx48j\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.696539 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.696569 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.696634 4667 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.696651 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.756288 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.757301 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.772525 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.772636 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-l8nc8" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.772866 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.797900 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.797966 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx48j\" (UniqueName: \"kubernetes.io/projected/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-kube-api-access-sx48j\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.798022 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.798057 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.798087 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.798115 
4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.798140 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.798158 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.799204 4667 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.800475 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.801984 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.802231 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.802975 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.866181 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.882211 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.899762 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snwzv\" (UniqueName: \"kubernetes.io/projected/23e21efc-a978-4734-9fe2-f210ab9952f5-kube-api-access-snwzv\") pod \"memcached-0\" (UID: \"23e21efc-a978-4734-9fe2-f210ab9952f5\") " pod="openstack/memcached-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.899826 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23e21efc-a978-4734-9fe2-f210ab9952f5-config-data\") pod \"memcached-0\" (UID: \"23e21efc-a978-4734-9fe2-f210ab9952f5\") " pod="openstack/memcached-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.899948 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e21efc-a978-4734-9fe2-f210ab9952f5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"23e21efc-a978-4734-9fe2-f210ab9952f5\") " pod="openstack/memcached-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.899979 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e21efc-a978-4734-9fe2-f210ab9952f5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"23e21efc-a978-4734-9fe2-f210ab9952f5\") " pod="openstack/memcached-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.900000 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23e21efc-a978-4734-9fe2-f210ab9952f5-kolla-config\") pod \"memcached-0\" (UID: \"23e21efc-a978-4734-9fe2-f210ab9952f5\") " pod="openstack/memcached-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.905141 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.911169 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 31 04:04:43 crc kubenswrapper[4667]: I0131 04:04:43.959311 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx48j\" (UniqueName: \"kubernetes.io/projected/ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7-kube-api-access-sx48j\") pod \"openstack-cell1-galera-0\" (UID: \"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7\") " pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:44 crc kubenswrapper[4667]: I0131 04:04:44.001677 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e21efc-a978-4734-9fe2-f210ab9952f5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"23e21efc-a978-4734-9fe2-f210ab9952f5\") " pod="openstack/memcached-0" Jan 31 04:04:44 crc kubenswrapper[4667]: I0131 04:04:44.001748 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/23e21efc-a978-4734-9fe2-f210ab9952f5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"23e21efc-a978-4734-9fe2-f210ab9952f5\") " pod="openstack/memcached-0" Jan 31 04:04:44 crc kubenswrapper[4667]: I0131 04:04:44.001827 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23e21efc-a978-4734-9fe2-f210ab9952f5-kolla-config\") pod \"memcached-0\" (UID: \"23e21efc-a978-4734-9fe2-f210ab9952f5\") " pod="openstack/memcached-0" Jan 31 04:04:44 crc kubenswrapper[4667]: I0131 04:04:44.001914 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snwzv\" (UniqueName: \"kubernetes.io/projected/23e21efc-a978-4734-9fe2-f210ab9952f5-kube-api-access-snwzv\") pod \"memcached-0\" (UID: \"23e21efc-a978-4734-9fe2-f210ab9952f5\") " pod="openstack/memcached-0" Jan 31 04:04:44 crc kubenswrapper[4667]: I0131 04:04:44.001944 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23e21efc-a978-4734-9fe2-f210ab9952f5-config-data\") pod \"memcached-0\" (UID: \"23e21efc-a978-4734-9fe2-f210ab9952f5\") " pod="openstack/memcached-0" Jan 31 04:04:44 crc kubenswrapper[4667]: I0131 04:04:44.003052 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23e21efc-a978-4734-9fe2-f210ab9952f5-config-data\") pod \"memcached-0\" (UID: \"23e21efc-a978-4734-9fe2-f210ab9952f5\") " pod="openstack/memcached-0" Jan 31 04:04:44 crc kubenswrapper[4667]: I0131 04:04:44.003819 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23e21efc-a978-4734-9fe2-f210ab9952f5-kolla-config\") pod \"memcached-0\" (UID: \"23e21efc-a978-4734-9fe2-f210ab9952f5\") " pod="openstack/memcached-0" Jan 31 04:04:44 crc kubenswrapper[4667]: I0131 04:04:44.014036 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e21efc-a978-4734-9fe2-f210ab9952f5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"23e21efc-a978-4734-9fe2-f210ab9952f5\") " pod="openstack/memcached-0" Jan 31 04:04:44 crc kubenswrapper[4667]: I0131 04:04:44.034855 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snwzv\" (UniqueName: \"kubernetes.io/projected/23e21efc-a978-4734-9fe2-f210ab9952f5-kube-api-access-snwzv\") pod \"memcached-0\" (UID: \"23e21efc-a978-4734-9fe2-f210ab9952f5\") " pod="openstack/memcached-0" Jan 31 04:04:44 crc kubenswrapper[4667]: I0131 04:04:44.035373 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e21efc-a978-4734-9fe2-f210ab9952f5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"23e21efc-a978-4734-9fe2-f210ab9952f5\") " pod="openstack/memcached-0" Jan 31 04:04:44 crc kubenswrapper[4667]: I0131 04:04:44.104174 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 31 04:04:44 crc kubenswrapper[4667]: I0131 04:04:44.126611 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 31 04:04:44 crc kubenswrapper[4667]: I0131 04:04:44.475733 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fc6e0899-ca0f-4aac-8510-cf35066a3290","Type":"ContainerStarted","Data":"b8ca702beeaca66d4cca2a38e177c43bd35a8223e385c88153ada311ced70dca"} Jan 31 04:04:44 crc kubenswrapper[4667]: I0131 04:04:44.887993 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 31 04:04:45 crc kubenswrapper[4667]: I0131 04:04:45.091486 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 31 04:04:45 crc kubenswrapper[4667]: I0131 04:04:45.530317 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7","Type":"ContainerStarted","Data":"fb4e9befe50173f67a3dca2f0ebf3645483840dae39d505f86a1fd3c389a443c"} Jan 31 04:04:45 crc kubenswrapper[4667]: I0131 04:04:45.551031 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"23e21efc-a978-4734-9fe2-f210ab9952f5","Type":"ContainerStarted","Data":"65ff57f41c53cf83cfd4c1e6233e329881b113faca08f2f0f50c651b581ddf33"} Jan 31 04:04:45 crc kubenswrapper[4667]: I0131 04:04:45.703977 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:04:45 crc kubenswrapper[4667]: I0131 04:04:45.704086 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:04:45 crc kubenswrapper[4667]: I0131 04:04:45.774229 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:04:45 crc kubenswrapper[4667]: I0131 04:04:45.776681 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 04:04:45 crc kubenswrapper[4667]: I0131 04:04:45.781164 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-884d4" Jan 31 04:04:45 crc kubenswrapper[4667]: I0131 04:04:45.854225 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-276l4\" (UniqueName: \"kubernetes.io/projected/9cffe8ff-780a-4dad-92ee-175a0a9d6409-kube-api-access-276l4\") pod \"kube-state-metrics-0\" (UID: \"9cffe8ff-780a-4dad-92ee-175a0a9d6409\") " pod="openstack/kube-state-metrics-0" Jan 31 04:04:45 crc kubenswrapper[4667]: I0131 04:04:45.895181 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:04:45 crc kubenswrapper[4667]: I0131 04:04:45.956509 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-276l4\" (UniqueName: \"kubernetes.io/projected/9cffe8ff-780a-4dad-92ee-175a0a9d6409-kube-api-access-276l4\") pod \"kube-state-metrics-0\" (UID: \"9cffe8ff-780a-4dad-92ee-175a0a9d6409\") " pod="openstack/kube-state-metrics-0" Jan 31 04:04:45 crc kubenswrapper[4667]: I0131 04:04:45.995567 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-276l4\" (UniqueName: \"kubernetes.io/projected/9cffe8ff-780a-4dad-92ee-175a0a9d6409-kube-api-access-276l4\") pod \"kube-state-metrics-0\" (UID: \"9cffe8ff-780a-4dad-92ee-175a0a9d6409\") " pod="openstack/kube-state-metrics-0" Jan 31 04:04:46 crc kubenswrapper[4667]: I0131 04:04:46.105224 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 04:04:46 crc kubenswrapper[4667]: I0131 04:04:46.801567 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:04:46 crc kubenswrapper[4667]: W0131 04:04:46.912089 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cffe8ff_780a_4dad_92ee_175a0a9d6409.slice/crio-eca10cf795fb75819b4b7f39f372bebd70828f104472116a3c88addc1b62e4a6 WatchSource:0}: Error finding container eca10cf795fb75819b4b7f39f372bebd70828f104472116a3c88addc1b62e4a6: Status 404 returned error can't find the container with id eca10cf795fb75819b4b7f39f372bebd70828f104472116a3c88addc1b62e4a6 Jan 31 04:04:47 crc kubenswrapper[4667]: I0131 04:04:47.584042 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9cffe8ff-780a-4dad-92ee-175a0a9d6409","Type":"ContainerStarted","Data":"eca10cf795fb75819b4b7f39f372bebd70828f104472116a3c88addc1b62e4a6"} Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.570057 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cn9wc"] Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.579768 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.584053 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.585754 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-jsnpb" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.586221 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.593885 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cn9wc"] Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.628322 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-m545l"] Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.630209 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.653340 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c3c43380-7b18-44fd-98f5-b9016923cdcb-var-log\") pod \"ovn-controller-ovs-m545l\" (UID: \"c3c43380-7b18-44fd-98f5-b9016923cdcb\") " pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.653397 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c3c43380-7b18-44fd-98f5-b9016923cdcb-etc-ovs\") pod \"ovn-controller-ovs-m545l\" (UID: \"c3c43380-7b18-44fd-98f5-b9016923cdcb\") " pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.653428 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c3c43380-7b18-44fd-98f5-b9016923cdcb-var-lib\") pod \"ovn-controller-ovs-m545l\" (UID: \"c3c43380-7b18-44fd-98f5-b9016923cdcb\") " pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.653446 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3c43380-7b18-44fd-98f5-b9016923cdcb-scripts\") pod \"ovn-controller-ovs-m545l\" (UID: \"c3c43380-7b18-44fd-98f5-b9016923cdcb\") " pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.653469 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/39c3d98f-a6b1-4558-b565-c9f8c3afa543-var-log-ovn\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.653503 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdzqv\" (UniqueName: \"kubernetes.io/projected/39c3d98f-a6b1-4558-b565-c9f8c3afa543-kube-api-access-rdzqv\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.653572 4667 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g54zx\" (UniqueName: \"kubernetes.io/projected/c3c43380-7b18-44fd-98f5-b9016923cdcb-kube-api-access-g54zx\") pod \"ovn-controller-ovs-m545l\" (UID: \"c3c43380-7b18-44fd-98f5-b9016923cdcb\") " pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.653602 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c3d98f-a6b1-4558-b565-c9f8c3afa543-combined-ca-bundle\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.653640 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/39c3d98f-a6b1-4558-b565-c9f8c3afa543-var-run\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.653671 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/39c3d98f-a6b1-4558-b565-c9f8c3afa543-ovn-controller-tls-certs\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.653707 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c3c43380-7b18-44fd-98f5-b9016923cdcb-var-run\") pod \"ovn-controller-ovs-m545l\" (UID: \"c3c43380-7b18-44fd-98f5-b9016923cdcb\") " pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.653746 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39c3d98f-a6b1-4558-b565-c9f8c3afa543-scripts\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.653769 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/39c3d98f-a6b1-4558-b565-c9f8c3afa543-var-run-ovn\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.683534 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-m545l"] Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.760952 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39c3d98f-a6b1-4558-b565-c9f8c3afa543-scripts\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.761010 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/39c3d98f-a6b1-4558-b565-c9f8c3afa543-var-run-ovn\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 
04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.761041 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c3c43380-7b18-44fd-98f5-b9016923cdcb-var-log\") pod \"ovn-controller-ovs-m545l\" (UID: \"c3c43380-7b18-44fd-98f5-b9016923cdcb\") " pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.761070 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c3c43380-7b18-44fd-98f5-b9016923cdcb-etc-ovs\") pod \"ovn-controller-ovs-m545l\" (UID: \"c3c43380-7b18-44fd-98f5-b9016923cdcb\") " pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.761094 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c3c43380-7b18-44fd-98f5-b9016923cdcb-var-lib\") pod \"ovn-controller-ovs-m545l\" (UID: \"c3c43380-7b18-44fd-98f5-b9016923cdcb\") " pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.761109 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3c43380-7b18-44fd-98f5-b9016923cdcb-scripts\") pod \"ovn-controller-ovs-m545l\" (UID: \"c3c43380-7b18-44fd-98f5-b9016923cdcb\") " pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.761127 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/39c3d98f-a6b1-4558-b565-c9f8c3afa543-var-log-ovn\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.761157 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdzqv\" (UniqueName: \"kubernetes.io/projected/39c3d98f-a6b1-4558-b565-c9f8c3afa543-kube-api-access-rdzqv\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.761180 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g54zx\" (UniqueName: \"kubernetes.io/projected/c3c43380-7b18-44fd-98f5-b9016923cdcb-kube-api-access-g54zx\") pod \"ovn-controller-ovs-m545l\" (UID: \"c3c43380-7b18-44fd-98f5-b9016923cdcb\") " pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.761197 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c3d98f-a6b1-4558-b565-c9f8c3afa543-combined-ca-bundle\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.761224 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/39c3d98f-a6b1-4558-b565-c9f8c3afa543-var-run\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.761251 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/39c3d98f-a6b1-4558-b565-c9f8c3afa543-ovn-controller-tls-certs\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.761270 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c3c43380-7b18-44fd-98f5-b9016923cdcb-var-run\") pod \"ovn-controller-ovs-m545l\" (UID: \"c3c43380-7b18-44fd-98f5-b9016923cdcb\") " pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.761782 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c3c43380-7b18-44fd-98f5-b9016923cdcb-var-run\") pod \"ovn-controller-ovs-m545l\" (UID: \"c3c43380-7b18-44fd-98f5-b9016923cdcb\") " pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.763027 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/39c3d98f-a6b1-4558-b565-c9f8c3afa543-var-log-ovn\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.766887 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c3c43380-7b18-44fd-98f5-b9016923cdcb-var-log\") pod \"ovn-controller-ovs-m545l\" (UID: \"c3c43380-7b18-44fd-98f5-b9016923cdcb\") " pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.769971 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39c3d98f-a6b1-4558-b565-c9f8c3afa543-scripts\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.770172 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/39c3d98f-a6b1-4558-b565-c9f8c3afa543-var-run-ovn\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.771832 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c3c43380-7b18-44fd-98f5-b9016923cdcb-etc-ovs\") pod \"ovn-controller-ovs-m545l\" (UID: \"c3c43380-7b18-44fd-98f5-b9016923cdcb\") " pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.772088 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c3c43380-7b18-44fd-98f5-b9016923cdcb-var-lib\") pod \"ovn-controller-ovs-m545l\" (UID: \"c3c43380-7b18-44fd-98f5-b9016923cdcb\") " pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.772273 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/39c3d98f-a6b1-4558-b565-c9f8c3afa543-var-run\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.776164 4667 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3c43380-7b18-44fd-98f5-b9016923cdcb-scripts\") pod \"ovn-controller-ovs-m545l\" (UID: \"c3c43380-7b18-44fd-98f5-b9016923cdcb\") " pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.780490 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdzqv\" (UniqueName: \"kubernetes.io/projected/39c3d98f-a6b1-4558-b565-c9f8c3afa543-kube-api-access-rdzqv\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.790851 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g54zx\" (UniqueName: \"kubernetes.io/projected/c3c43380-7b18-44fd-98f5-b9016923cdcb-kube-api-access-g54zx\") pod \"ovn-controller-ovs-m545l\" (UID: \"c3c43380-7b18-44fd-98f5-b9016923cdcb\") " pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.792544 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/39c3d98f-a6b1-4558-b565-c9f8c3afa543-ovn-controller-tls-certs\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.835884 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c3d98f-a6b1-4558-b565-c9f8c3afa543-combined-ca-bundle\") pod \"ovn-controller-cn9wc\" (UID: \"39c3d98f-a6b1-4558-b565-c9f8c3afa543\") " pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.926553 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cn9wc" Jan 31 04:04:49 crc kubenswrapper[4667]: I0131 04:04:49.964621 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:04:50 crc kubenswrapper[4667]: I0131 04:04:50.467022 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cn9wc"] Jan 31 04:04:51 crc kubenswrapper[4667]: I0131 04:04:51.529711 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-m545l"] Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.082327 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-hbhzb"] Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.092811 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-hbhzb" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.096648 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.096987 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.104363 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hbhzb"] Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.218380 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/73d60e7c-9a2f-4e04-8b13-31956316c5dc-ovs-rundir\") pod \"ovn-controller-metrics-hbhzb\" (UID: \"73d60e7c-9a2f-4e04-8b13-31956316c5dc\") " pod="openstack/ovn-controller-metrics-hbhzb" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.218770 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73d60e7c-9a2f-4e04-8b13-31956316c5dc-combined-ca-bundle\") pod \"ovn-controller-metrics-hbhzb\" (UID: \"73d60e7c-9a2f-4e04-8b13-31956316c5dc\") " pod="openstack/ovn-controller-metrics-hbhzb" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.220064 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d60e7c-9a2f-4e04-8b13-31956316c5dc-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hbhzb\" (UID: \"73d60e7c-9a2f-4e04-8b13-31956316c5dc\") " pod="openstack/ovn-controller-metrics-hbhzb" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.223099 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/73d60e7c-9a2f-4e04-8b13-31956316c5dc-ovn-rundir\") pod \"ovn-controller-metrics-hbhzb\" (UID: \"73d60e7c-9a2f-4e04-8b13-31956316c5dc\") " pod="openstack/ovn-controller-metrics-hbhzb" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.223305 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfscp\" (UniqueName: \"kubernetes.io/projected/73d60e7c-9a2f-4e04-8b13-31956316c5dc-kube-api-access-dfscp\") pod \"ovn-controller-metrics-hbhzb\" (UID: \"73d60e7c-9a2f-4e04-8b13-31956316c5dc\") " pod="openstack/ovn-controller-metrics-hbhzb" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.223420 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73d60e7c-9a2f-4e04-8b13-31956316c5dc-config\") pod \"ovn-controller-metrics-hbhzb\" (UID: \"73d60e7c-9a2f-4e04-8b13-31956316c5dc\") " pod="openstack/ovn-controller-metrics-hbhzb" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.325831 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfscp\" (UniqueName: \"kubernetes.io/projected/73d60e7c-9a2f-4e04-8b13-31956316c5dc-kube-api-access-dfscp\") pod \"ovn-controller-metrics-hbhzb\" (UID: \"73d60e7c-9a2f-4e04-8b13-31956316c5dc\") " pod="openstack/ovn-controller-metrics-hbhzb" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.326281 4667 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73d60e7c-9a2f-4e04-8b13-31956316c5dc-config\") pod \"ovn-controller-metrics-hbhzb\" (UID: \"73d60e7c-9a2f-4e04-8b13-31956316c5dc\") " pod="openstack/ovn-controller-metrics-hbhzb" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.326331 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/73d60e7c-9a2f-4e04-8b13-31956316c5dc-ovs-rundir\") pod \"ovn-controller-metrics-hbhzb\" (UID: \"73d60e7c-9a2f-4e04-8b13-31956316c5dc\") " pod="openstack/ovn-controller-metrics-hbhzb" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.326352 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73d60e7c-9a2f-4e04-8b13-31956316c5dc-combined-ca-bundle\") pod \"ovn-controller-metrics-hbhzb\" (UID: \"73d60e7c-9a2f-4e04-8b13-31956316c5dc\") " pod="openstack/ovn-controller-metrics-hbhzb" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.326410 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73d60e7c-9a2f-4e04-8b13-31956316c5dc-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hbhzb\" (UID: \"73d60e7c-9a2f-4e04-8b13-31956316c5dc\") " pod="openstack/ovn-controller-metrics-hbhzb" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.326428 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/73d60e7c-9a2f-4e04-8b13-31956316c5dc-ovn-rundir\") pod \"ovn-controller-metrics-hbhzb\" (UID: \"73d60e7c-9a2f-4e04-8b13-31956316c5dc\") " pod="openstack/ovn-controller-metrics-hbhzb" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.326727 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/73d60e7c-9a2f-4e04-8b13-31956316c5dc-ovn-rundir\") pod \"ovn-controller-metrics-hbhzb\" (UID: \"73d60e7c-9a2f-4e04-8b13-31956316c5dc\") " pod="openstack/ovn-controller-metrics-hbhzb" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.334062 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/73d60e7c-9a2f-4e04-8b13-31956316c5dc-ovs-rundir\") pod \"ovn-controller-metrics-hbhzb\" (UID: \"73d60e7c-9a2f-4e04-8b13-31956316c5dc\") " pod="openstack/ovn-controller-metrics-hbhzb" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.335261 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73d60e7c-9a2f-4e04-8b13-31956316c5dc-config\") pod \"ovn-controller-metrics-hbhzb\" (UID: \"73d60e7c-9a2f-4e04-8b13-31956316c5dc\") " pod="openstack/ovn-controller-metrics-hbhzb" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.341741 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73d60e7c-9a2f-4e04-8b13-31956316c5dc-combined-ca-bundle\") pod \"ovn-controller-metrics-hbhzb\" (UID: \"73d60e7c-9a2f-4e04-8b13-31956316c5dc\") " pod="openstack/ovn-controller-metrics-hbhzb" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.342867 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/73d60e7c-9a2f-4e04-8b13-31956316c5dc-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hbhzb\" (UID: \"73d60e7c-9a2f-4e04-8b13-31956316c5dc\") " pod="openstack/ovn-controller-metrics-hbhzb" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.351505 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfscp\" (UniqueName: \"kubernetes.io/projected/73d60e7c-9a2f-4e04-8b13-31956316c5dc-kube-api-access-dfscp\") pod \"ovn-controller-metrics-hbhzb\" (UID: \"73d60e7c-9a2f-4e04-8b13-31956316c5dc\") " pod="openstack/ovn-controller-metrics-hbhzb" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.439032 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.446508 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.446625 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.452801 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.453081 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-5hsbz" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.453105 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.453193 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.474780 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-hbhzb" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.636075 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.636165 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b881387-78fb-40db-8985-412849ad9068-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.636191 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b881387-78fb-40db-8985-412849ad9068-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.636218 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b881387-78fb-40db-8985-412849ad9068-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.636251 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5b881387-78fb-40db-8985-412849ad9068-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.636273 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b881387-78fb-40db-8985-412849ad9068-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.636325 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66f4b\" (UniqueName: \"kubernetes.io/projected/5b881387-78fb-40db-8985-412849ad9068-kube-api-access-66f4b\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.636348 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b881387-78fb-40db-8985-412849ad9068-config\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.738892 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.738996 4667 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b881387-78fb-40db-8985-412849ad9068-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.739026 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b881387-78fb-40db-8985-412849ad9068-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.739059 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b881387-78fb-40db-8985-412849ad9068-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.739094 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5b881387-78fb-40db-8985-412849ad9068-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.739126 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b881387-78fb-40db-8985-412849ad9068-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.739209 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66f4b\" (UniqueName: \"kubernetes.io/projected/5b881387-78fb-40db-8985-412849ad9068-kube-api-access-66f4b\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.739242 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b881387-78fb-40db-8985-412849ad9068-config\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.740154 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5b881387-78fb-40db-8985-412849ad9068-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.740303 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b881387-78fb-40db-8985-412849ad9068-config\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.740617 4667 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") device mount path \"/mnt/openstack/pv06\"" 
pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.742301 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b881387-78fb-40db-8985-412849ad9068-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.744956 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b881387-78fb-40db-8985-412849ad9068-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.745316 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b881387-78fb-40db-8985-412849ad9068-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.753978 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b881387-78fb-40db-8985-412849ad9068-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.764641 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.769072 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66f4b\" (UniqueName: \"kubernetes.io/projected/5b881387-78fb-40db-8985-412849ad9068-kube-api-access-66f4b\") pod \"ovsdbserver-nb-0\" (UID: \"5b881387-78fb-40db-8985-412849ad9068\") " pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:52 crc kubenswrapper[4667]: I0131 04:04:52.771965 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.228375 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.235377 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.238584 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.238761 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-wfxtm" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.238908 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.239008 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.244449 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.347919 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.347977 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.348189 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.348261 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-config\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.348388 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.348437 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spmp9\" (UniqueName: \"kubernetes.io/projected/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-kube-api-access-spmp9\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.348498 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.348528 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.449984 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.450070 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.450158 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.450210 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-config\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.450271 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.450307 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spmp9\" (UniqueName: \"kubernetes.io/projected/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-kube-api-access-spmp9\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.450354 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.450378 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.451270 4667 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.452035 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.452241 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-config\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.453120 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.457004 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.457448 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.467491 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.467623 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spmp9\" (UniqueName: \"kubernetes.io/projected/591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7-kube-api-access-spmp9\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.476137 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7\") " pod="openstack/ovsdbserver-sb-0" Jan 31 04:04:53 crc kubenswrapper[4667]: I0131 04:04:53.567715 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 31 04:05:00 crc kubenswrapper[4667]: I0131 04:05:00.819559 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cn9wc" event={"ID":"39c3d98f-a6b1-4558-b565-c9f8c3afa543","Type":"ContainerStarted","Data":"d90c29dae5580c26c24eb3e7cb64229b113c9380140b6b85ce86a00da683e79a"} Jan 31 04:05:02 crc kubenswrapper[4667]: I0131 04:05:02.832791 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m545l" event={"ID":"c3c43380-7b18-44fd-98f5-b9016923cdcb","Type":"ContainerStarted","Data":"e2cd403e76c0abb1fb3fe2a3c37b9dc0f452b94d6955ff9fb99be8c883ef01dc"} Jan 31 04:05:12 crc kubenswrapper[4667]: E0131 04:05:12.430739 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 31 04:05:12 crc kubenswrapper[4667]: E0131 04:05:12.432562 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q8wcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(9265013e-d7ee-49cf-a5d8-c2f80066f459): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:05:12 crc kubenswrapper[4667]: E0131 04:05:12.434777 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="9265013e-d7ee-49cf-a5d8-c2f80066f459" Jan 31 04:05:12 crc kubenswrapper[4667]: E0131 04:05:12.495180 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 31 04:05:12 crc kubenswrapper[4667]: E0131 04:05:12.495351 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vt87r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(fc6e0899-ca0f-4aac-8510-cf35066a3290): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:05:12 crc kubenswrapper[4667]: E0131 04:05:12.496555 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="fc6e0899-ca0f-4aac-8510-cf35066a3290" Jan 31 04:05:12 crc kubenswrapper[4667]: E0131 04:05:12.520199 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 31 04:05:12 crc kubenswrapper[4667]: E0131 04:05:12.520364 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 
20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48xpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(bf3f1a21-51b1-4282-99e5-ab52084984c0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:05:12 crc kubenswrapper[4667]: E0131 04:05:12.521608 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="bf3f1a21-51b1-4282-99e5-ab52084984c0" Jan 31 04:05:12 crc kubenswrapper[4667]: E0131 04:05:12.911499 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="bf3f1a21-51b1-4282-99e5-ab52084984c0" Jan 31 04:05:12 crc kubenswrapper[4667]: E0131 04:05:12.911516 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="9265013e-d7ee-49cf-a5d8-c2f80066f459" Jan 31 04:05:12 crc kubenswrapper[4667]: E0131 04:05:12.912109 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" 
pod="openstack/openstack-galera-0" podUID="fc6e0899-ca0f-4aac-8510-cf35066a3290" Jan 31 04:05:13 crc kubenswrapper[4667]: E0131 04:05:13.637263 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 31 04:05:13 crc kubenswrapper[4667]: E0131 04:05:13.638960 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n554h559h66h5d4h7ch668h679h667h76h5bhb7h54hdfh5b5h8fh587h5d8h689h65h547h66ch65ch5c7h5c4h58dh5f9h656h59chc7h99h676h554q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-snwzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(23e21efc-a978-4734-9fe2-f210ab9952f5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:05:13 crc kubenswrapper[4667]: E0131 04:05:13.640929 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="23e21efc-a978-4734-9fe2-f210ab9952f5" Jan 31 04:05:13 crc kubenswrapper[4667]: E0131 04:05:13.934542 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="23e21efc-a978-4734-9fe2-f210ab9952f5" Jan 31 04:05:14 crc kubenswrapper[4667]: I0131 04:05:14.361694 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 31 04:05:14 crc kubenswrapper[4667]: E0131 04:05:14.909081 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 04:05:14 crc kubenswrapper[4667]: E0131 04:05:14.909496 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5hc4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-vkbvv_openstack(e167032a-ddb6-4f07-8a1e-9f135c8d73a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:05:14 crc kubenswrapper[4667]: E0131 04:05:14.911514 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-vkbvv" podUID="e167032a-ddb6-4f07-8a1e-9f135c8d73a3" Jan 31 04:05:14 crc kubenswrapper[4667]: E0131 04:05:14.959371 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 04:05:14 crc kubenswrapper[4667]: E0131 04:05:14.959588 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfp6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-8bs85_openstack(c9fb21e9-25ef-48b0-99cb-67d39aa677d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:05:14 crc kubenswrapper[4667]: E0131 04:05:14.960586 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 04:05:14 crc kubenswrapper[4667]: E0131 04:05:14.960785 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-8bs85" podUID="c9fb21e9-25ef-48b0-99cb-67d39aa677d1" Jan 31 04:05:14 crc kubenswrapper[4667]: E0131 04:05:14.960857 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-svfh8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-zcgws_openstack(5d332fef-7a81-4aca-b797-2b3d526c50c9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:05:14 crc kubenswrapper[4667]: E0131 04:05:14.962041 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-zcgws" podUID="5d332fef-7a81-4aca-b797-2b3d526c50c9" Jan 31 04:05:14 crc kubenswrapper[4667]: E0131 04:05:14.993362 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 31 04:05:14 crc kubenswrapper[4667]: E0131 04:05:14.993593 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2sc7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-8xpd4_openstack(6dfb8d60-e646-49ab-8886-d751855667aa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:05:14 crc kubenswrapper[4667]: E0131 04:05:14.994746 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-8xpd4" podUID="6dfb8d60-e646-49ab-8886-d751855667aa" Jan 31 04:05:15 crc kubenswrapper[4667]: I0131 04:05:15.704401 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:05:15 crc kubenswrapper[4667]: I0131 04:05:15.704930 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:05:15 crc kubenswrapper[4667]: E0131 04:05:15.946164 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-zcgws" podUID="5d332fef-7a81-4aca-b797-2b3d526c50c9" Jan 31 04:05:15 crc kubenswrapper[4667]: E0131 04:05:15.946613 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-8xpd4" podUID="6dfb8d60-e646-49ab-8886-d751855667aa" Jan 31 04:05:16 crc kubenswrapper[4667]: I0131 04:05:16.951484 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5b881387-78fb-40db-8985-412849ad9068","Type":"ContainerStarted","Data":"22a954ef5cb965ba4cef95c661c2768cc40842f2fad6d8f4a8064a4aa7bc3bf6"} Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.094780 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vkbvv" Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.094994 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hbhzb"] Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.100212 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8bs85" Jan 31 04:05:17 crc kubenswrapper[4667]: E0131 04:05:17.131284 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 31 04:05:17 crc kubenswrapper[4667]: E0131 04:05:17.131355 4667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 31 04:05:17 crc kubenswrapper[4667]: E0131 04:05:17.131528 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-276l4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(9cffe8ff-780a-4dad-92ee-175a0a9d6409): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Jan 31 04:05:17 crc kubenswrapper[4667]: E0131 04:05:17.132873 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="9cffe8ff-780a-4dad-92ee-175a0a9d6409" Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.208533 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9fb21e9-25ef-48b0-99cb-67d39aa677d1-config\") pod \"c9fb21e9-25ef-48b0-99cb-67d39aa677d1\" (UID: \"c9fb21e9-25ef-48b0-99cb-67d39aa677d1\") " Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.208622 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfp6f\" (UniqueName: \"kubernetes.io/projected/c9fb21e9-25ef-48b0-99cb-67d39aa677d1-kube-api-access-sfp6f\") pod \"c9fb21e9-25ef-48b0-99cb-67d39aa677d1\" (UID: \"c9fb21e9-25ef-48b0-99cb-67d39aa677d1\") " Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.208682 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e167032a-ddb6-4f07-8a1e-9f135c8d73a3-config\") pod \"e167032a-ddb6-4f07-8a1e-9f135c8d73a3\" (UID: \"e167032a-ddb6-4f07-8a1e-9f135c8d73a3\") " Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.208749 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e167032a-ddb6-4f07-8a1e-9f135c8d73a3-dns-svc\") pod \"e167032a-ddb6-4f07-8a1e-9f135c8d73a3\" (UID: \"e167032a-ddb6-4f07-8a1e-9f135c8d73a3\") " Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.208778 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hc4m\" (UniqueName: \"kubernetes.io/projected/e167032a-ddb6-4f07-8a1e-9f135c8d73a3-kube-api-access-5hc4m\") pod \"e167032a-ddb6-4f07-8a1e-9f135c8d73a3\" (UID: \"e167032a-ddb6-4f07-8a1e-9f135c8d73a3\") " Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.212866 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e167032a-ddb6-4f07-8a1e-9f135c8d73a3-config" (OuterVolumeSpecName: "config") pod "e167032a-ddb6-4f07-8a1e-9f135c8d73a3" (UID: 
"e167032a-ddb6-4f07-8a1e-9f135c8d73a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.212869 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e167032a-ddb6-4f07-8a1e-9f135c8d73a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e167032a-ddb6-4f07-8a1e-9f135c8d73a3" (UID: "e167032a-ddb6-4f07-8a1e-9f135c8d73a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.213188 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e167032a-ddb6-4f07-8a1e-9f135c8d73a3-kube-api-access-5hc4m" (OuterVolumeSpecName: "kube-api-access-5hc4m") pod "e167032a-ddb6-4f07-8a1e-9f135c8d73a3" (UID: "e167032a-ddb6-4f07-8a1e-9f135c8d73a3"). InnerVolumeSpecName "kube-api-access-5hc4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.214246 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9fb21e9-25ef-48b0-99cb-67d39aa677d1-config" (OuterVolumeSpecName: "config") pod "c9fb21e9-25ef-48b0-99cb-67d39aa677d1" (UID: "c9fb21e9-25ef-48b0-99cb-67d39aa677d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.216142 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9fb21e9-25ef-48b0-99cb-67d39aa677d1-kube-api-access-sfp6f" (OuterVolumeSpecName: "kube-api-access-sfp6f") pod "c9fb21e9-25ef-48b0-99cb-67d39aa677d1" (UID: "c9fb21e9-25ef-48b0-99cb-67d39aa677d1"). InnerVolumeSpecName "kube-api-access-sfp6f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.252506 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.324344 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfp6f\" (UniqueName: \"kubernetes.io/projected/c9fb21e9-25ef-48b0-99cb-67d39aa677d1-kube-api-access-sfp6f\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.324381 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e167032a-ddb6-4f07-8a1e-9f135c8d73a3-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.324393 4667 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e167032a-ddb6-4f07-8a1e-9f135c8d73a3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.324409 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hc4m\" (UniqueName: \"kubernetes.io/projected/e167032a-ddb6-4f07-8a1e-9f135c8d73a3-kube-api-access-5hc4m\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.329406 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9fb21e9-25ef-48b0-99cb-67d39aa677d1-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.970116 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8bs85" event={"ID":"c9fb21e9-25ef-48b0-99cb-67d39aa677d1","Type":"ContainerDied","Data":"b578f96fd03a99700ae851b98d388e7fcb2e8607109fd627568dc7f856765558"} Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.970497 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8bs85" Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.974481 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cn9wc" event={"ID":"39c3d98f-a6b1-4558-b565-c9f8c3afa543","Type":"ContainerStarted","Data":"39911459199a33c98af110e8ef4c7700a9837d91065a50a6a0c117b442c5ad5b"} Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.974613 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-cn9wc" Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.977255 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7","Type":"ContainerStarted","Data":"6a418a34eba2f5a45552bc901ab1f4ca9d3a7aeb9db17a32e5868c34643e4a5a"} Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.982325 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hbhzb" event={"ID":"73d60e7c-9a2f-4e04-8b13-31956316c5dc","Type":"ContainerStarted","Data":"616c3c86ab29727e0f05eb27f8bbe3583a30e369feb5dc26ca1bbb43849306fb"} Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.987929 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7","Type":"ContainerStarted","Data":"60a8d445dc42ca6593548793a23f4fd32cff3c778a4dfe6403252d8b40ccca98"} Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.991384 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m545l" event={"ID":"c3c43380-7b18-44fd-98f5-b9016923cdcb","Type":"ContainerStarted","Data":"86c7a4f851324b049fc438186a1d625ef5738f0de877d13cda43f974ab575b44"} Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.993339 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vkbvv" event={"ID":"e167032a-ddb6-4f07-8a1e-9f135c8d73a3","Type":"ContainerDied","Data":"a8c1605c00c8858eaed5009d6271740ac91f66798b2f756a6fa54d019c62ea82"} Jan 31 04:05:17 crc kubenswrapper[4667]: I0131 04:05:17.993351 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vkbvv" Jan 31 04:05:17 crc kubenswrapper[4667]: E0131 04:05:17.994988 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="9cffe8ff-780a-4dad-92ee-175a0a9d6409" Jan 31 04:05:18 crc kubenswrapper[4667]: I0131 04:05:18.010761 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-cn9wc" podStartSLOduration=11.964965682999999 podStartE2EDuration="29.010732669s" podCreationTimestamp="2026-01-31 04:04:49 +0000 UTC" firstStartedPulling="2026-01-31 04:05:00.15404969 +0000 UTC m=+1023.670385009" lastFinishedPulling="2026-01-31 04:05:17.199816696 +0000 UTC m=+1040.716151995" observedRunningTime="2026-01-31 04:05:18.009895797 +0000 UTC m=+1041.526231096" watchObservedRunningTime="2026-01-31 04:05:18.010732669 +0000 UTC m=+1041.527067968" Jan 31 04:05:18 crc kubenswrapper[4667]: I0131 04:05:18.061527 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8bs85"] Jan 31 04:05:18 crc kubenswrapper[4667]: I0131 04:05:18.077532 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8bs85"] Jan 31 04:05:18 crc kubenswrapper[4667]: I0131 04:05:18.236579 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vkbvv"] Jan 31 04:05:18 crc kubenswrapper[4667]: I0131 04:05:18.274786 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vkbvv"] Jan 31 04:05:19 crc kubenswrapper[4667]: I0131 04:05:19.003801 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5b881387-78fb-40db-8985-412849ad9068","Type":"ContainerStarted","Data":"ecafc08202471bebbc255d83d4e2fe1c8ecbe7707c2088781279b184462246f3"} Jan 31 04:05:19 crc kubenswrapper[4667]: I0131 04:05:19.006534 4667 generic.go:334] "Generic (PLEG): container finished" podID="c3c43380-7b18-44fd-98f5-b9016923cdcb" containerID="86c7a4f851324b049fc438186a1d625ef5738f0de877d13cda43f974ab575b44" exitCode=0 Jan 31 04:05:19 crc kubenswrapper[4667]: I0131 04:05:19.006955 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m545l" event={"ID":"c3c43380-7b18-44fd-98f5-b9016923cdcb","Type":"ContainerDied","Data":"86c7a4f851324b049fc438186a1d625ef5738f0de877d13cda43f974ab575b44"} Jan 31 04:05:19 crc kubenswrapper[4667]: I0131 04:05:19.294377 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9fb21e9-25ef-48b0-99cb-67d39aa677d1" path="/var/lib/kubelet/pods/c9fb21e9-25ef-48b0-99cb-67d39aa677d1/volumes" Jan 31 04:05:19 crc kubenswrapper[4667]: I0131 04:05:19.294918 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e167032a-ddb6-4f07-8a1e-9f135c8d73a3" path="/var/lib/kubelet/pods/e167032a-ddb6-4f07-8a1e-9f135c8d73a3/volumes" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.029552 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5b881387-78fb-40db-8985-412849ad9068","Type":"ContainerStarted","Data":"bf782de23c2b8a41288bb73f4cdd83f398d829acf7bd260597cf9f8d68d3cdd1"} Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.032256 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7","Type":"ContainerStarted","Data":"cda13d5bdd5f9cd2b5ab22d26afbfa9f1c5b3e34abdaf76777bf01091a5b1632"} Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.032312 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7","Type":"ContainerStarted","Data":"4fd7505faa2f1ccb9f040e48443434470f62d3a5ceb3f78cee5d1b290f5b073c"} Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.035807 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hbhzb" event={"ID":"73d60e7c-9a2f-4e04-8b13-31956316c5dc","Type":"ContainerStarted","Data":"92fed9d2a3617f4064b436a55fec9018c0d6981e0f547e5f57134bba2fc1b493"} Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.039066 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m545l" event={"ID":"c3c43380-7b18-44fd-98f5-b9016923cdcb","Type":"ContainerStarted","Data":"2d42efec2254a0c64d0ec8cc11f665f01a0c4363f9efc407c8d75783e8d7989e"} Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.039130 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-m545l" event={"ID":"c3c43380-7b18-44fd-98f5-b9016923cdcb","Type":"ContainerStarted","Data":"8f60634c7416a859caccdff06157f479a966e800fce7a0cc031ab7aa198f02c3"} Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.039950 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.039986 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.067876 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=26.746454414 podStartE2EDuration="30.067853565s" podCreationTimestamp="2026-01-31 04:04:51 +0000 UTC" firstStartedPulling="2026-01-31 04:05:16.62797675 +0000 UTC m=+1040.144312089" lastFinishedPulling="2026-01-31 04:05:19.949375941 +0000 UTC m=+1043.465711240" observedRunningTime="2026-01-31 04:05:21.0616385 +0000 UTC m=+1044.577973799" watchObservedRunningTime="2026-01-31 04:05:21.067853565 +0000 UTC m=+1044.584188864" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.100124 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-m545l" podStartSLOduration=17.346295156 podStartE2EDuration="32.100097358s" podCreationTimestamp="2026-01-31 04:04:49 +0000 UTC" firstStartedPulling="2026-01-31 04:05:02.412101296 +0000 UTC m=+1025.928436605" lastFinishedPulling="2026-01-31 04:05:17.165903508 +0000 UTC m=+1040.682238807" observedRunningTime="2026-01-31 04:05:21.096124893 +0000 UTC m=+1044.612460202" watchObservedRunningTime="2026-01-31 04:05:21.100097358 +0000 UTC m=+1044.616432657" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.124615 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=26.483724869 podStartE2EDuration="29.124594317s" podCreationTimestamp="2026-01-31 04:04:52 +0000 UTC" firstStartedPulling="2026-01-31 04:05:17.287121027 +0000 UTC m=+1040.803456336" lastFinishedPulling="2026-01-31 04:05:19.927990495 +0000 UTC m=+1043.444325784" observedRunningTime="2026-01-31 04:05:21.116230275 +0000 UTC m=+1044.632565624" watchObservedRunningTime="2026-01-31 
04:05:21.124594317 +0000 UTC m=+1044.640929616" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.136363 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-hbhzb" podStartSLOduration=26.355224468 podStartE2EDuration="29.136342348s" podCreationTimestamp="2026-01-31 04:04:52 +0000 UTC" firstStartedPulling="2026-01-31 04:05:17.17615502 +0000 UTC m=+1040.692490319" lastFinishedPulling="2026-01-31 04:05:19.9572729 +0000 UTC m=+1043.473608199" observedRunningTime="2026-01-31 04:05:21.133346928 +0000 UTC m=+1044.649682237" watchObservedRunningTime="2026-01-31 04:05:21.136342348 +0000 UTC m=+1044.652677647" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.552062 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zcgws"] Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.600689 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-zbv9r"] Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.602045 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.612977 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.632163 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-zbv9r"] Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.759853 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztxqv\" (UniqueName: \"kubernetes.io/projected/fdd746e5-f56d-479b-9487-ea1bacca12d0-kube-api-access-ztxqv\") pod \"dnsmasq-dns-7f896c8c65-zbv9r\" (UID: \"fdd746e5-f56d-479b-9487-ea1bacca12d0\") " pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.759936 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdd746e5-f56d-479b-9487-ea1bacca12d0-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-zbv9r\" (UID: \"fdd746e5-f56d-479b-9487-ea1bacca12d0\") " pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.759963 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd746e5-f56d-479b-9487-ea1bacca12d0-config\") pod \"dnsmasq-dns-7f896c8c65-zbv9r\" (UID: \"fdd746e5-f56d-479b-9487-ea1bacca12d0\") " pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.760052 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdd746e5-f56d-479b-9487-ea1bacca12d0-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-zbv9r\" (UID: \"fdd746e5-f56d-479b-9487-ea1bacca12d0\") " pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.862323 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdd746e5-f56d-479b-9487-ea1bacca12d0-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-zbv9r\" (UID: \"fdd746e5-f56d-479b-9487-ea1bacca12d0\") " pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 
04:05:21.862754 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztxqv\" (UniqueName: \"kubernetes.io/projected/fdd746e5-f56d-479b-9487-ea1bacca12d0-kube-api-access-ztxqv\") pod \"dnsmasq-dns-7f896c8c65-zbv9r\" (UID: \"fdd746e5-f56d-479b-9487-ea1bacca12d0\") " pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.862801 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdd746e5-f56d-479b-9487-ea1bacca12d0-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-zbv9r\" (UID: \"fdd746e5-f56d-479b-9487-ea1bacca12d0\") " pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.862830 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd746e5-f56d-479b-9487-ea1bacca12d0-config\") pod \"dnsmasq-dns-7f896c8c65-zbv9r\" (UID: \"fdd746e5-f56d-479b-9487-ea1bacca12d0\") " pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.863521 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdd746e5-f56d-479b-9487-ea1bacca12d0-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-zbv9r\" (UID: \"fdd746e5-f56d-479b-9487-ea1bacca12d0\") " pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.863931 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd746e5-f56d-479b-9487-ea1bacca12d0-config\") pod \"dnsmasq-dns-7f896c8c65-zbv9r\" (UID: \"fdd746e5-f56d-479b-9487-ea1bacca12d0\") " pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.864461 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdd746e5-f56d-479b-9487-ea1bacca12d0-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-zbv9r\" (UID: \"fdd746e5-f56d-479b-9487-ea1bacca12d0\") " pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.912900 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztxqv\" (UniqueName: \"kubernetes.io/projected/fdd746e5-f56d-479b-9487-ea1bacca12d0-kube-api-access-ztxqv\") pod \"dnsmasq-dns-7f896c8c65-zbv9r\" (UID: \"fdd746e5-f56d-479b-9487-ea1bacca12d0\") " pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.923176 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" Jan 31 04:05:21 crc kubenswrapper[4667]: I0131 04:05:21.984812 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8xpd4"] Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.082919 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-nh6hz"] Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.085996 4667 util.go:30] "No sandbox for pod can be found. 
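
Note: the pod_startup_latency_tracker entries above fit a simple relationship: podStartSLOduration equals podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling), so the SLO figure discounts time spent pulling images. The monotonic m=+ offsets printed in the log let this be checked exactly for ovsdbserver-nb-0:

    package main

    import "fmt"

    func main() {
        // Monotonic clock offsets (m=+...) copied from the ovsdbserver-nb-0 entry.
        firstStartedPulling := 1040.144312089
        lastFinishedPulling := 1043.465711240
        e2e := 30.067853565 // podStartE2EDuration in seconds

        pulling := lastFinishedPulling - firstStartedPulling
        slo := e2e - pulling
        fmt.Printf("image pulling: %.9fs\n", pulling) // 3.321399151s
        fmt.Printf("SLO duration:  %.9fs\n", slo)     // 26.746454414s, matching the log
    }
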
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.096874 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.183766 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqsmc\" (UniqueName: \"kubernetes.io/projected/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-kube-api-access-tqsmc\") pod \"dnsmasq-dns-86db49b7ff-nh6hz\" (UID: \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\") " pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.183896 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-config\") pod \"dnsmasq-dns-86db49b7ff-nh6hz\" (UID: \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\") " pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.183955 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-nh6hz\" (UID: \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\") " pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.183974 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-nh6hz\" (UID: \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\") " pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.184019 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-nh6hz\" (UID: \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\") " pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.193250 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-nh6hz"] Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.286306 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqsmc\" (UniqueName: \"kubernetes.io/projected/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-kube-api-access-tqsmc\") pod \"dnsmasq-dns-86db49b7ff-nh6hz\" (UID: \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\") " pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.286413 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-config\") pod \"dnsmasq-dns-86db49b7ff-nh6hz\" (UID: \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\") " pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.286488 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-nh6hz\" (UID: \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.286505 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-nh6hz\" (UID: \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\") " pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.286568 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-nh6hz\" (UID: \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\") " pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.288183 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-nh6hz\" (UID: \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\") " pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.295599 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-nh6hz\" (UID: \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\") " pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.304803 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-config\") pod \"dnsmasq-dns-86db49b7ff-nh6hz\" (UID: \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\") " pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.306410 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zcgws" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.306474 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-nh6hz\" (UID: \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\") " pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.317461 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqsmc\" (UniqueName: \"kubernetes.io/projected/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-kube-api-access-tqsmc\") pod \"dnsmasq-dns-86db49b7ff-nh6hz\" (UID: \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\") " pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.391798 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d332fef-7a81-4aca-b797-2b3d526c50c9-dns-svc\") pod \"5d332fef-7a81-4aca-b797-2b3d526c50c9\" (UID: \"5d332fef-7a81-4aca-b797-2b3d526c50c9\") " Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.392224 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d332fef-7a81-4aca-b797-2b3d526c50c9-config\") pod \"5d332fef-7a81-4aca-b797-2b3d526c50c9\" (UID: \"5d332fef-7a81-4aca-b797-2b3d526c50c9\") " Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.392273 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svfh8\" (UniqueName: \"kubernetes.io/projected/5d332fef-7a81-4aca-b797-2b3d526c50c9-kube-api-access-svfh8\") pod \"5d332fef-7a81-4aca-b797-2b3d526c50c9\" (UID: \"5d332fef-7a81-4aca-b797-2b3d526c50c9\") " Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.393626 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d332fef-7a81-4aca-b797-2b3d526c50c9-config" (OuterVolumeSpecName: "config") pod "5d332fef-7a81-4aca-b797-2b3d526c50c9" (UID: "5d332fef-7a81-4aca-b797-2b3d526c50c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.394085 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d332fef-7a81-4aca-b797-2b3d526c50c9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5d332fef-7a81-4aca-b797-2b3d526c50c9" (UID: "5d332fef-7a81-4aca-b797-2b3d526c50c9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.405074 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d332fef-7a81-4aca-b797-2b3d526c50c9-kube-api-access-svfh8" (OuterVolumeSpecName: "kube-api-access-svfh8") pod "5d332fef-7a81-4aca-b797-2b3d526c50c9" (UID: "5d332fef-7a81-4aca-b797-2b3d526c50c9"). InnerVolumeSpecName "kube-api-access-svfh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.440451 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.494269 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svfh8\" (UniqueName: \"kubernetes.io/projected/5d332fef-7a81-4aca-b797-2b3d526c50c9-kube-api-access-svfh8\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.494394 4667 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d332fef-7a81-4aca-b797-2b3d526c50c9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.494454 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d332fef-7a81-4aca-b797-2b3d526c50c9-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.718675 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8xpd4" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.772655 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.772714 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.801508 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sc7p\" (UniqueName: \"kubernetes.io/projected/6dfb8d60-e646-49ab-8886-d751855667aa-kube-api-access-2sc7p\") pod \"6dfb8d60-e646-49ab-8886-d751855667aa\" (UID: \"6dfb8d60-e646-49ab-8886-d751855667aa\") " Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.801597 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dfb8d60-e646-49ab-8886-d751855667aa-config\") pod \"6dfb8d60-e646-49ab-8886-d751855667aa\" (UID: \"6dfb8d60-e646-49ab-8886-d751855667aa\") " Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.801628 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dfb8d60-e646-49ab-8886-d751855667aa-dns-svc\") pod \"6dfb8d60-e646-49ab-8886-d751855667aa\" (UID: \"6dfb8d60-e646-49ab-8886-d751855667aa\") " Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.802020 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dfb8d60-e646-49ab-8886-d751855667aa-config" (OuterVolumeSpecName: "config") pod "6dfb8d60-e646-49ab-8886-d751855667aa" (UID: "6dfb8d60-e646-49ab-8886-d751855667aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.802457 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dfb8d60-e646-49ab-8886-d751855667aa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6dfb8d60-e646-49ab-8886-d751855667aa" (UID: "6dfb8d60-e646-49ab-8886-d751855667aa"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.802884 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dfb8d60-e646-49ab-8886-d751855667aa-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.802895 4667 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dfb8d60-e646-49ab-8886-d751855667aa-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.805791 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dfb8d60-e646-49ab-8886-d751855667aa-kube-api-access-2sc7p" (OuterVolumeSpecName: "kube-api-access-2sc7p") pod "6dfb8d60-e646-49ab-8886-d751855667aa" (UID: "6dfb8d60-e646-49ab-8886-d751855667aa"). InnerVolumeSpecName "kube-api-access-2sc7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.841709 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 31 04:05:22 crc kubenswrapper[4667]: W0131 04:05:22.871260 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdd746e5_f56d_479b_9487_ea1bacca12d0.slice/crio-94110e814b41a3cd9b0f6b35e1301c602ebd4088fe9fc4f18bf3123048f869a5 WatchSource:0}: Error finding container 94110e814b41a3cd9b0f6b35e1301c602ebd4088fe9fc4f18bf3123048f869a5: Status 404 returned error can't find the container with id 94110e814b41a3cd9b0f6b35e1301c602ebd4088fe9fc4f18bf3123048f869a5 Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.879039 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-zbv9r"] Jan 31 04:05:22 crc kubenswrapper[4667]: I0131 04:05:22.904801 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sc7p\" (UniqueName: \"kubernetes.io/projected/6dfb8d60-e646-49ab-8886-d751855667aa-kube-api-access-2sc7p\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:23 crc kubenswrapper[4667]: I0131 04:05:23.090268 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-nh6hz"] Jan 31 04:05:23 crc kubenswrapper[4667]: W0131 04:05:23.097975 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c3f2700_4311_41b0_8b3a_20d1dd1db82f.slice/crio-c7059081f800e139b361cd7fe1f99ce0f74719139120a6f3d616bb4e99af6906 WatchSource:0}: Error finding container c7059081f800e139b361cd7fe1f99ce0f74719139120a6f3d616bb4e99af6906: Status 404 returned error can't find the container with id c7059081f800e139b361cd7fe1f99ce0f74719139120a6f3d616bb4e99af6906 Jan 31 04:05:23 crc kubenswrapper[4667]: I0131 04:05:23.136128 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" event={"ID":"fdd746e5-f56d-479b-9487-ea1bacca12d0","Type":"ContainerStarted","Data":"94110e814b41a3cd9b0f6b35e1301c602ebd4088fe9fc4f18bf3123048f869a5"} Jan 31 04:05:23 crc kubenswrapper[4667]: I0131 04:05:23.137095 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-zcgws" event={"ID":"5d332fef-7a81-4aca-b797-2b3d526c50c9","Type":"ContainerDied","Data":"1d40557eb04f7c5018e3301e52b40885bc68ac6f710ac9ca292b5aa0e5b0c25c"} Jan 31 04:05:23 crc kubenswrapper[4667]: 
I0131 04:05:23.137133 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zcgws" Jan 31 04:05:23 crc kubenswrapper[4667]: I0131 04:05:23.138872 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" event={"ID":"5c3f2700-4311-41b0-8b3a-20d1dd1db82f","Type":"ContainerStarted","Data":"c7059081f800e139b361cd7fe1f99ce0f74719139120a6f3d616bb4e99af6906"} Jan 31 04:05:23 crc kubenswrapper[4667]: I0131 04:05:23.140777 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8xpd4" Jan 31 04:05:23 crc kubenswrapper[4667]: I0131 04:05:23.140765 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8xpd4" event={"ID":"6dfb8d60-e646-49ab-8886-d751855667aa","Type":"ContainerDied","Data":"8d0d90f361955f2dae2138f7b408c9aa03fc4ac541ddb421ffc5ad6ab9f3736d"} Jan 31 04:05:23 crc kubenswrapper[4667]: I0131 04:05:23.143466 4667 generic.go:334] "Generic (PLEG): container finished" podID="ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7" containerID="60a8d445dc42ca6593548793a23f4fd32cff3c778a4dfe6403252d8b40ccca98" exitCode=0 Jan 31 04:05:23 crc kubenswrapper[4667]: I0131 04:05:23.143615 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7","Type":"ContainerDied","Data":"60a8d445dc42ca6593548793a23f4fd32cff3c778a4dfe6403252d8b40ccca98"} Jan 31 04:05:23 crc kubenswrapper[4667]: I0131 04:05:23.217706 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 31 04:05:23 crc kubenswrapper[4667]: I0131 04:05:23.342623 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zcgws"] Jan 31 04:05:23 crc kubenswrapper[4667]: I0131 04:05:23.347263 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zcgws"] Jan 31 04:05:23 crc kubenswrapper[4667]: I0131 04:05:23.397617 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8xpd4"] Jan 31 04:05:23 crc kubenswrapper[4667]: I0131 04:05:23.412164 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8xpd4"] Jan 31 04:05:23 crc kubenswrapper[4667]: I0131 04:05:23.589114 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 31 04:05:23 crc kubenswrapper[4667]: I0131 04:05:23.589668 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 31 04:05:23 crc kubenswrapper[4667]: I0131 04:05:23.647959 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 31 04:05:24 crc kubenswrapper[4667]: I0131 04:05:24.156872 4667 generic.go:334] "Generic (PLEG): container finished" podID="fdd746e5-f56d-479b-9487-ea1bacca12d0" containerID="6132043d3cba5aa66b72a3958bee508454d04b31e08c3467f1657dc0fdc4ddb5" exitCode=0 Jan 31 04:05:24 crc kubenswrapper[4667]: I0131 04:05:24.157004 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" event={"ID":"fdd746e5-f56d-479b-9487-ea1bacca12d0","Type":"ContainerDied","Data":"6132043d3cba5aa66b72a3958bee508454d04b31e08c3467f1657dc0fdc4ddb5"} Jan 31 04:05:24 crc kubenswrapper[4667]: I0131 04:05:24.161568 4667 generic.go:334] "Generic (PLEG): container finished" 
podID="5c3f2700-4311-41b0-8b3a-20d1dd1db82f" containerID="6087cdfdc5c24ec633d027c8af5cbad2f61895dc9144974a2c24bfba451f1cad" exitCode=0 Jan 31 04:05:24 crc kubenswrapper[4667]: I0131 04:05:24.161634 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" event={"ID":"5c3f2700-4311-41b0-8b3a-20d1dd1db82f","Type":"ContainerDied","Data":"6087cdfdc5c24ec633d027c8af5cbad2f61895dc9144974a2c24bfba451f1cad"} Jan 31 04:05:24 crc kubenswrapper[4667]: I0131 04:05:24.165214 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7","Type":"ContainerStarted","Data":"d3f21d7ae25e7274b025e19e91b399f38494ccd1cce639c601403e2d8fd4aaeb"} Jan 31 04:05:24 crc kubenswrapper[4667]: I0131 04:05:24.170236 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fc6e0899-ca0f-4aac-8510-cf35066a3290","Type":"ContainerStarted","Data":"7b83106faccbb0e284cc6d57e87ed12f85945fa9220b49e918f0638ff972101e"} Jan 31 04:05:24 crc kubenswrapper[4667]: I0131 04:05:24.279946 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=10.591355112 podStartE2EDuration="42.279923022s" podCreationTimestamp="2026-01-31 04:04:42 +0000 UTC" firstStartedPulling="2026-01-31 04:04:44.924591898 +0000 UTC m=+1008.440927197" lastFinishedPulling="2026-01-31 04:05:16.613159768 +0000 UTC m=+1040.129495107" observedRunningTime="2026-01-31 04:05:24.278541935 +0000 UTC m=+1047.794877234" watchObservedRunningTime="2026-01-31 04:05:24.279923022 +0000 UTC m=+1047.796258321" Jan 31 04:05:25 crc kubenswrapper[4667]: I0131 04:05:25.176895 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" event={"ID":"fdd746e5-f56d-479b-9487-ea1bacca12d0","Type":"ContainerStarted","Data":"8475693c29ebb551a3cd93c6ab7c1d3b9d802e4bcfd5bdde2a19a973038f2f51"} Jan 31 04:05:25 crc kubenswrapper[4667]: I0131 04:05:25.178287 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" Jan 31 04:05:25 crc kubenswrapper[4667]: I0131 04:05:25.180445 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" event={"ID":"5c3f2700-4311-41b0-8b3a-20d1dd1db82f","Type":"ContainerStarted","Data":"3d55288e1ad6d29fe991ad00ab9924d42e4bba0be58b1cdb8eae289b38942fee"} Jan 31 04:05:25 crc kubenswrapper[4667]: I0131 04:05:25.196649 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" podStartSLOduration=3.748431362 podStartE2EDuration="4.196632225s" podCreationTimestamp="2026-01-31 04:05:21 +0000 UTC" firstStartedPulling="2026-01-31 04:05:22.874180334 +0000 UTC m=+1046.390515633" lastFinishedPulling="2026-01-31 04:05:23.322381197 +0000 UTC m=+1046.838716496" observedRunningTime="2026-01-31 04:05:25.193251846 +0000 UTC m=+1048.709587145" watchObservedRunningTime="2026-01-31 04:05:25.196632225 +0000 UTC m=+1048.712967524" Jan 31 04:05:25 crc kubenswrapper[4667]: I0131 04:05:25.230597 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" podStartSLOduration=2.785142574 podStartE2EDuration="3.230579834s" podCreationTimestamp="2026-01-31 04:05:22 +0000 UTC" firstStartedPulling="2026-01-31 04:05:23.100526635 +0000 UTC m=+1046.616861934" lastFinishedPulling="2026-01-31 04:05:23.545963895 
+0000 UTC m=+1047.062299194" observedRunningTime="2026-01-31 04:05:25.226957208 +0000 UTC m=+1048.743292507" watchObservedRunningTime="2026-01-31 04:05:25.230579834 +0000 UTC m=+1048.746915133" Jan 31 04:05:25 crc kubenswrapper[4667]: I0131 04:05:25.292189 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d332fef-7a81-4aca-b797-2b3d526c50c9" path="/var/lib/kubelet/pods/5d332fef-7a81-4aca-b797-2b3d526c50c9/volumes" Jan 31 04:05:25 crc kubenswrapper[4667]: I0131 04:05:25.292561 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dfb8d60-e646-49ab-8886-d751855667aa" path="/var/lib/kubelet/pods/6dfb8d60-e646-49ab-8886-d751855667aa/volumes" Jan 31 04:05:26 crc kubenswrapper[4667]: I0131 04:05:26.187786 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"23e21efc-a978-4734-9fe2-f210ab9952f5","Type":"ContainerStarted","Data":"f8d0aac7338cb2dd3e24aaf2597583782467a08ba3a0e7150097c9a697d6db22"} Jan 31 04:05:26 crc kubenswrapper[4667]: I0131 04:05:26.188250 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:26 crc kubenswrapper[4667]: I0131 04:05:26.188495 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 31 04:05:26 crc kubenswrapper[4667]: I0131 04:05:26.204265 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.480800161 podStartE2EDuration="43.204242474s" podCreationTimestamp="2026-01-31 04:04:43 +0000 UTC" firstStartedPulling="2026-01-31 04:04:45.160092512 +0000 UTC m=+1008.676427811" lastFinishedPulling="2026-01-31 04:05:25.883534825 +0000 UTC m=+1049.399870124" observedRunningTime="2026-01-31 04:05:26.202458487 +0000 UTC m=+1049.718793786" watchObservedRunningTime="2026-01-31 04:05:26.204242474 +0000 UTC m=+1049.720577773" Jan 31 04:05:27 crc kubenswrapper[4667]: I0131 04:05:27.196212 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9265013e-d7ee-49cf-a5d8-c2f80066f459","Type":"ContainerStarted","Data":"4acd211c95f8f9b2d57a089d9d7532f112c96d9e87ccf2175d7359401f40ac7e"} Jan 31 04:05:28 crc kubenswrapper[4667]: I0131 04:05:28.205942 4667 generic.go:334] "Generic (PLEG): container finished" podID="fc6e0899-ca0f-4aac-8510-cf35066a3290" containerID="7b83106faccbb0e284cc6d57e87ed12f85945fa9220b49e918f0638ff972101e" exitCode=0 Jan 31 04:05:28 crc kubenswrapper[4667]: I0131 04:05:28.206014 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fc6e0899-ca0f-4aac-8510-cf35066a3290","Type":"ContainerDied","Data":"7b83106faccbb0e284cc6d57e87ed12f85945fa9220b49e918f0638ff972101e"} Jan 31 04:05:28 crc kubenswrapper[4667]: I0131 04:05:28.604425 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 31 04:05:28 crc kubenswrapper[4667]: I0131 04:05:28.843328 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 31 04:05:28 crc kubenswrapper[4667]: I0131 04:05:28.844859 4667 util.go:30] "No sandbox for pod can be found. 
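
Note: the PLEG sequence for the surviving dnsmasq pods shows the init-container gate working: "container finished ... exitCode=0" at 04:05:24 for each pod, then ContainerStarted for the main container at 04:05:25. Since the init command in the earlier dumps ends in --test, its job is plausibly dnsmasq's configuration syntax check: exit 0 releases the pod, while non-zero would hold it in Init. The same check outside Kubernetes, assuming a local dnsmasq binary, looks like:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // dnsmasq --test parses the configuration and exits non-zero on error;
        // run as an init container, that exit code is what gates the pod.
        cmd := exec.Command("dnsmasq", "--test", "--conf-dir=/etc/dnsmasq.d")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            // e.g. *exec.ExitError carrying the would-be init exitCode.
            fmt.Println("config check failed:", err)
            return
        }
        fmt.Println("config check passed (exit 0)")
    }
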
Need to start a new one" pod="openstack/ovn-northd-0" Jan 31 04:05:28 crc kubenswrapper[4667]: I0131 04:05:28.852175 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 31 04:05:28 crc kubenswrapper[4667]: I0131 04:05:28.852352 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-qfr4c" Jan 31 04:05:28 crc kubenswrapper[4667]: I0131 04:05:28.852470 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 31 04:05:28 crc kubenswrapper[4667]: I0131 04:05:28.852601 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 31 04:05:28 crc kubenswrapper[4667]: I0131 04:05:28.925609 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc5pn\" (UniqueName: \"kubernetes.io/projected/1e5983aa-121c-4344-884a-438181c3ac0d-kube-api-access-jc5pn\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:28 crc kubenswrapper[4667]: I0131 04:05:28.925679 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e5983aa-121c-4344-884a-438181c3ac0d-config\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:28 crc kubenswrapper[4667]: I0131 04:05:28.925703 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e5983aa-121c-4344-884a-438181c3ac0d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:28 crc kubenswrapper[4667]: I0131 04:05:28.925718 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e5983aa-121c-4344-884a-438181c3ac0d-scripts\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:28 crc kubenswrapper[4667]: I0131 04:05:28.925736 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5983aa-121c-4344-884a-438181c3ac0d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:28 crc kubenswrapper[4667]: I0131 04:05:28.925768 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e5983aa-121c-4344-884a-438181c3ac0d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:28 crc kubenswrapper[4667]: I0131 04:05:28.925791 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e5983aa-121c-4344-884a-438181c3ac0d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:28 crc kubenswrapper[4667]: I0131 04:05:28.926050 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 31 04:05:29 crc kubenswrapper[4667]: 
I0131 04:05:29.026819 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e5983aa-121c-4344-884a-438181c3ac0d-config\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:29 crc kubenswrapper[4667]: I0131 04:05:29.026879 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e5983aa-121c-4344-884a-438181c3ac0d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:29 crc kubenswrapper[4667]: I0131 04:05:29.026899 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e5983aa-121c-4344-884a-438181c3ac0d-scripts\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:29 crc kubenswrapper[4667]: I0131 04:05:29.026919 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5983aa-121c-4344-884a-438181c3ac0d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:29 crc kubenswrapper[4667]: I0131 04:05:29.026955 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e5983aa-121c-4344-884a-438181c3ac0d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:29 crc kubenswrapper[4667]: I0131 04:05:29.027104 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e5983aa-121c-4344-884a-438181c3ac0d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:29 crc kubenswrapper[4667]: I0131 04:05:29.027822 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e5983aa-121c-4344-884a-438181c3ac0d-config\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:29 crc kubenswrapper[4667]: I0131 04:05:29.027898 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e5983aa-121c-4344-884a-438181c3ac0d-scripts\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:29 crc kubenswrapper[4667]: I0131 04:05:29.028286 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc5pn\" (UniqueName: \"kubernetes.io/projected/1e5983aa-121c-4344-884a-438181c3ac0d-kube-api-access-jc5pn\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:29 crc kubenswrapper[4667]: I0131 04:05:29.028434 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e5983aa-121c-4344-884a-438181c3ac0d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:29 crc kubenswrapper[4667]: I0131 04:05:29.040932 4667 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5983aa-121c-4344-884a-438181c3ac0d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:29 crc kubenswrapper[4667]: I0131 04:05:29.041454 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e5983aa-121c-4344-884a-438181c3ac0d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:29 crc kubenswrapper[4667]: I0131 04:05:29.054160 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc5pn\" (UniqueName: \"kubernetes.io/projected/1e5983aa-121c-4344-884a-438181c3ac0d-kube-api-access-jc5pn\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:29 crc kubenswrapper[4667]: I0131 04:05:29.054915 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e5983aa-121c-4344-884a-438181c3ac0d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e5983aa-121c-4344-884a-438181c3ac0d\") " pod="openstack/ovn-northd-0" Jan 31 04:05:29 crc kubenswrapper[4667]: I0131 04:05:29.167406 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 31 04:05:29 crc kubenswrapper[4667]: I0131 04:05:29.225100 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fc6e0899-ca0f-4aac-8510-cf35066a3290","Type":"ContainerStarted","Data":"ae9eae51a52e24fb7c2436a05a43aed83a9ec4d27825f201297465e369c297aa"} Jan 31 04:05:29 crc kubenswrapper[4667]: I0131 04:05:29.231102 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf3f1a21-51b1-4282-99e5-ab52084984c0","Type":"ContainerStarted","Data":"4efc4bdb3236480020f2071f80c15b872eb6c2e4c82ea5836e3deb47c3e785a5"} Jan 31 04:05:29 crc kubenswrapper[4667]: I0131 04:05:29.259386 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371988.59541 podStartE2EDuration="48.259365967s" podCreationTimestamp="2026-01-31 04:04:41 +0000 UTC" firstStartedPulling="2026-01-31 04:04:43.697737126 +0000 UTC m=+1007.214072425" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:29.253465341 +0000 UTC m=+1052.769800640" watchObservedRunningTime="2026-01-31 04:05:29.259365967 +0000 UTC m=+1052.775701266" Jan 31 04:05:29 crc kubenswrapper[4667]: W0131 04:05:29.699154 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e5983aa_121c_4344_884a_438181c3ac0d.slice/crio-6a4e859ca27c9a8872672fd35adb82dc0bfe46a70d03fc0d239c7a2c75ab608a WatchSource:0}: Error finding container 6a4e859ca27c9a8872672fd35adb82dc0bfe46a70d03fc0d239c7a2c75ab608a: Status 404 returned error can't find the container with id 6a4e859ca27c9a8872672fd35adb82dc0bfe46a70d03fc0d239c7a2c75ab608a Jan 31 04:05:29 crc kubenswrapper[4667]: I0131 04:05:29.714550 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 31 04:05:30 crc kubenswrapper[4667]: I0131 04:05:30.238301 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-northd-0" event={"ID":"1e5983aa-121c-4344-884a-438181c3ac0d","Type":"ContainerStarted","Data":"6a4e859ca27c9a8872672fd35adb82dc0bfe46a70d03fc0d239c7a2c75ab608a"} Jan 31 04:05:31 crc kubenswrapper[4667]: I0131 04:05:31.926570 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" Jan 31 04:05:32 crc kubenswrapper[4667]: I0131 04:05:32.253419 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1e5983aa-121c-4344-884a-438181c3ac0d","Type":"ContainerStarted","Data":"51c8e6b35e4945b9457a596112d22eaddab23c23c1dd86c46ad626c1bc9bc193"} Jan 31 04:05:32 crc kubenswrapper[4667]: I0131 04:05:32.253460 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1e5983aa-121c-4344-884a-438181c3ac0d","Type":"ContainerStarted","Data":"c01313a67cde7fe17d99b703ce056851b9c21db874717b2339060244e18bb334"} Jan 31 04:05:32 crc kubenswrapper[4667]: I0131 04:05:32.253624 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 31 04:05:32 crc kubenswrapper[4667]: I0131 04:05:32.278439 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.7046491809999997 podStartE2EDuration="4.278414066s" podCreationTimestamp="2026-01-31 04:05:28 +0000 UTC" firstStartedPulling="2026-01-31 04:05:29.701302154 +0000 UTC m=+1053.217637453" lastFinishedPulling="2026-01-31 04:05:31.275067039 +0000 UTC m=+1054.791402338" observedRunningTime="2026-01-31 04:05:32.27327693 +0000 UTC m=+1055.789612229" watchObservedRunningTime="2026-01-31 04:05:32.278414066 +0000 UTC m=+1055.794749365" Jan 31 04:05:32 crc kubenswrapper[4667]: I0131 04:05:32.442937 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:32 crc kubenswrapper[4667]: I0131 04:05:32.498395 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-zbv9r"] Jan 31 04:05:32 crc kubenswrapper[4667]: I0131 04:05:32.498827 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" podUID="fdd746e5-f56d-479b-9487-ea1bacca12d0" containerName="dnsmasq-dns" containerID="cri-o://8475693c29ebb551a3cd93c6ab7c1d3b9d802e4bcfd5bdde2a19a973038f2f51" gracePeriod=10 Jan 31 04:05:32 crc kubenswrapper[4667]: I0131 04:05:32.987626 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 31 04:05:32 crc kubenswrapper[4667]: I0131 04:05:32.988032 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 31 04:05:33 crc kubenswrapper[4667]: I0131 04:05:33.269814 4667 generic.go:334] "Generic (PLEG): container finished" podID="fdd746e5-f56d-479b-9487-ea1bacca12d0" containerID="8475693c29ebb551a3cd93c6ab7c1d3b9d802e4bcfd5bdde2a19a973038f2f51" exitCode=0 Jan 31 04:05:33 crc kubenswrapper[4667]: I0131 04:05:33.270875 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" event={"ID":"fdd746e5-f56d-479b-9487-ea1bacca12d0","Type":"ContainerDied","Data":"8475693c29ebb551a3cd93c6ab7c1d3b9d802e4bcfd5bdde2a19a973038f2f51"} Jan 31 04:05:33 crc kubenswrapper[4667]: I0131 04:05:33.518965 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 31 04:05:33 crc 
kubenswrapper[4667]: I0131 04:05:33.614111 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 31 04:05:33 crc kubenswrapper[4667]: I0131 04:05:33.864158 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" Jan 31 04:05:33 crc kubenswrapper[4667]: I0131 04:05:33.967918 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b82f-account-create-update-df9j2"] Jan 31 04:05:33 crc kubenswrapper[4667]: E0131 04:05:33.968739 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd746e5-f56d-479b-9487-ea1bacca12d0" containerName="init" Jan 31 04:05:33 crc kubenswrapper[4667]: I0131 04:05:33.968832 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd746e5-f56d-479b-9487-ea1bacca12d0" containerName="init" Jan 31 04:05:33 crc kubenswrapper[4667]: E0131 04:05:33.969147 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd746e5-f56d-479b-9487-ea1bacca12d0" containerName="dnsmasq-dns" Jan 31 04:05:33 crc kubenswrapper[4667]: I0131 04:05:33.969227 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd746e5-f56d-479b-9487-ea1bacca12d0" containerName="dnsmasq-dns" Jan 31 04:05:33 crc kubenswrapper[4667]: I0131 04:05:33.969683 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd746e5-f56d-479b-9487-ea1bacca12d0" containerName="dnsmasq-dns" Jan 31 04:05:33 crc kubenswrapper[4667]: I0131 04:05:33.970579 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b82f-account-create-update-df9j2" Jan 31 04:05:33 crc kubenswrapper[4667]: I0131 04:05:33.975521 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.004327 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b82f-account-create-update-df9j2"] Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.026298 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdd746e5-f56d-479b-9487-ea1bacca12d0-dns-svc\") pod \"fdd746e5-f56d-479b-9487-ea1bacca12d0\" (UID: \"fdd746e5-f56d-479b-9487-ea1bacca12d0\") " Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.027824 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd746e5-f56d-479b-9487-ea1bacca12d0-config\") pod \"fdd746e5-f56d-479b-9487-ea1bacca12d0\" (UID: \"fdd746e5-f56d-479b-9487-ea1bacca12d0\") " Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.028191 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdd746e5-f56d-479b-9487-ea1bacca12d0-ovsdbserver-sb\") pod \"fdd746e5-f56d-479b-9487-ea1bacca12d0\" (UID: \"fdd746e5-f56d-479b-9487-ea1bacca12d0\") " Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.028778 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztxqv\" (UniqueName: \"kubernetes.io/projected/fdd746e5-f56d-479b-9487-ea1bacca12d0-kube-api-access-ztxqv\") pod \"fdd746e5-f56d-479b-9487-ea1bacca12d0\" (UID: \"fdd746e5-f56d-479b-9487-ea1bacca12d0\") " Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.073165 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/fdd746e5-f56d-479b-9487-ea1bacca12d0-config" (OuterVolumeSpecName: "config") pod "fdd746e5-f56d-479b-9487-ea1bacca12d0" (UID: "fdd746e5-f56d-479b-9487-ea1bacca12d0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.089045 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd746e5-f56d-479b-9487-ea1bacca12d0-kube-api-access-ztxqv" (OuterVolumeSpecName: "kube-api-access-ztxqv") pod "fdd746e5-f56d-479b-9487-ea1bacca12d0" (UID: "fdd746e5-f56d-479b-9487-ea1bacca12d0"). InnerVolumeSpecName "kube-api-access-ztxqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.108479 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-85mm4"] Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.113665 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.114986 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-85mm4" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.122632 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdd746e5-f56d-479b-9487-ea1bacca12d0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fdd746e5-f56d-479b-9487-ea1bacca12d0" (UID: "fdd746e5-f56d-479b-9487-ea1bacca12d0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.130997 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-85mm4"] Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.131968 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.132354 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.133927 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95e02c93-990d-43de-b11d-db36bc7524a6-operator-scripts\") pod \"keystone-b82f-account-create-update-df9j2\" (UID: \"95e02c93-990d-43de-b11d-db36bc7524a6\") " pod="openstack/keystone-b82f-account-create-update-df9j2" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.134028 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml9cx\" (UniqueName: \"kubernetes.io/projected/95e02c93-990d-43de-b11d-db36bc7524a6-kube-api-access-ml9cx\") pod \"keystone-b82f-account-create-update-df9j2\" (UID: \"95e02c93-990d-43de-b11d-db36bc7524a6\") " pod="openstack/keystone-b82f-account-create-update-df9j2" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.135392 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdd746e5-f56d-479b-9487-ea1bacca12d0-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.135593 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/fdd746e5-f56d-479b-9487-ea1bacca12d0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.135613 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztxqv\" (UniqueName: \"kubernetes.io/projected/fdd746e5-f56d-479b-9487-ea1bacca12d0-kube-api-access-ztxqv\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.176285 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdd746e5-f56d-479b-9487-ea1bacca12d0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fdd746e5-f56d-479b-9487-ea1bacca12d0" (UID: "fdd746e5-f56d-479b-9487-ea1bacca12d0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.236920 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlhmq\" (UniqueName: \"kubernetes.io/projected/343ee4a5-b9a5-48fe-863c-b668c87c384a-kube-api-access-mlhmq\") pod \"placement-db-create-85mm4\" (UID: \"343ee4a5-b9a5-48fe-863c-b668c87c384a\") " pod="openstack/placement-db-create-85mm4" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.236974 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml9cx\" (UniqueName: \"kubernetes.io/projected/95e02c93-990d-43de-b11d-db36bc7524a6-kube-api-access-ml9cx\") pod \"keystone-b82f-account-create-update-df9j2\" (UID: \"95e02c93-990d-43de-b11d-db36bc7524a6\") " pod="openstack/keystone-b82f-account-create-update-df9j2" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.237060 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8891-account-create-update-b6rdx"] Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.237063 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/343ee4a5-b9a5-48fe-863c-b668c87c384a-operator-scripts\") pod \"placement-db-create-85mm4\" (UID: \"343ee4a5-b9a5-48fe-863c-b668c87c384a\") " pod="openstack/placement-db-create-85mm4" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.238124 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8891-account-create-update-b6rdx" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.238230 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95e02c93-990d-43de-b11d-db36bc7524a6-operator-scripts\") pod \"keystone-b82f-account-create-update-df9j2\" (UID: \"95e02c93-990d-43de-b11d-db36bc7524a6\") " pod="openstack/keystone-b82f-account-create-update-df9j2" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.238528 4667 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdd746e5-f56d-479b-9487-ea1bacca12d0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.239432 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95e02c93-990d-43de-b11d-db36bc7524a6-operator-scripts\") pod \"keystone-b82f-account-create-update-df9j2\" (UID: \"95e02c93-990d-43de-b11d-db36bc7524a6\") " pod="openstack/keystone-b82f-account-create-update-df9j2" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.244104 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.255470 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8891-account-create-update-b6rdx"] Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.265222 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml9cx\" (UniqueName: \"kubernetes.io/projected/95e02c93-990d-43de-b11d-db36bc7524a6-kube-api-access-ml9cx\") pod \"keystone-b82f-account-create-update-df9j2\" (UID: \"95e02c93-990d-43de-b11d-db36bc7524a6\") " pod="openstack/keystone-b82f-account-create-update-df9j2" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.304466 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" event={"ID":"fdd746e5-f56d-479b-9487-ea1bacca12d0","Type":"ContainerDied","Data":"94110e814b41a3cd9b0f6b35e1301c602ebd4088fe9fc4f18bf3123048f869a5"} Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.304715 4667 scope.go:117] "RemoveContainer" containerID="8475693c29ebb551a3cd93c6ab7c1d3b9d802e4bcfd5bdde2a19a973038f2f51" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.304960 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-zbv9r" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.314521 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9cffe8ff-780a-4dad-92ee-175a0a9d6409","Type":"ContainerStarted","Data":"74b39cb087f94a38ff1a37f6c11923c7ec96ec48c9654afe04b71a791ad2129d"} Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.315398 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b82f-account-create-update-df9j2" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.315452 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.339666 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7744d973-bda8-482c-8c36-d3e9e7a484a4-operator-scripts\") pod \"placement-8891-account-create-update-b6rdx\" (UID: \"7744d973-bda8-482c-8c36-d3e9e7a484a4\") " pod="openstack/placement-8891-account-create-update-b6rdx" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.339761 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlhmq\" (UniqueName: \"kubernetes.io/projected/343ee4a5-b9a5-48fe-863c-b668c87c384a-kube-api-access-mlhmq\") pod \"placement-db-create-85mm4\" (UID: \"343ee4a5-b9a5-48fe-863c-b668c87c384a\") " pod="openstack/placement-db-create-85mm4" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.339817 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/343ee4a5-b9a5-48fe-863c-b668c87c384a-operator-scripts\") pod \"placement-db-create-85mm4\" (UID: \"343ee4a5-b9a5-48fe-863c-b668c87c384a\") " pod="openstack/placement-db-create-85mm4" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.339846 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbmmr\" (UniqueName: \"kubernetes.io/projected/7744d973-bda8-482c-8c36-d3e9e7a484a4-kube-api-access-xbmmr\") pod \"placement-8891-account-create-update-b6rdx\" (UID: \"7744d973-bda8-482c-8c36-d3e9e7a484a4\") " pod="openstack/placement-8891-account-create-update-b6rdx" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.340573 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/343ee4a5-b9a5-48fe-863c-b668c87c384a-operator-scripts\") pod \"placement-db-create-85mm4\" (UID: \"343ee4a5-b9a5-48fe-863c-b668c87c384a\") " pod="openstack/placement-db-create-85mm4" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.367378 4667 scope.go:117] "RemoveContainer" containerID="6132043d3cba5aa66b72a3958bee508454d04b31e08c3467f1657dc0fdc4ddb5" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.379216 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlhmq\" (UniqueName: \"kubernetes.io/projected/343ee4a5-b9a5-48fe-863c-b668c87c384a-kube-api-access-mlhmq\") pod \"placement-db-create-85mm4\" (UID: \"343ee4a5-b9a5-48fe-863c-b668c87c384a\") " pod="openstack/placement-db-create-85mm4" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.391542 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.479412899 podStartE2EDuration="49.391524175s" podCreationTimestamp="2026-01-31 04:04:45 +0000 UTC" firstStartedPulling="2026-01-31 04:04:46.944601243 +0000 UTC m=+1010.460936542" lastFinishedPulling="2026-01-31 04:05:33.856712509 +0000 UTC m=+1057.373047818" observedRunningTime="2026-01-31 04:05:34.36075701 +0000 UTC m=+1057.877092309" watchObservedRunningTime="2026-01-31 04:05:34.391524175 +0000 UTC m=+1057.907859474" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.403576 
4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-zbv9r"] Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.416229 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-zbv9r"] Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.440831 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbmmr\" (UniqueName: \"kubernetes.io/projected/7744d973-bda8-482c-8c36-d3e9e7a484a4-kube-api-access-xbmmr\") pod \"placement-8891-account-create-update-b6rdx\" (UID: \"7744d973-bda8-482c-8c36-d3e9e7a484a4\") " pod="openstack/placement-8891-account-create-update-b6rdx" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.441149 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7744d973-bda8-482c-8c36-d3e9e7a484a4-operator-scripts\") pod \"placement-8891-account-create-update-b6rdx\" (UID: \"7744d973-bda8-482c-8c36-d3e9e7a484a4\") " pod="openstack/placement-8891-account-create-update-b6rdx" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.445049 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7744d973-bda8-482c-8c36-d3e9e7a484a4-operator-scripts\") pod \"placement-8891-account-create-update-b6rdx\" (UID: \"7744d973-bda8-482c-8c36-d3e9e7a484a4\") " pod="openstack/placement-8891-account-create-update-b6rdx" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.448872 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-cwd7k"] Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.449890 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cwd7k" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.466211 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cwd7k"] Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.476297 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-85mm4" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.479187 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbmmr\" (UniqueName: \"kubernetes.io/projected/7744d973-bda8-482c-8c36-d3e9e7a484a4-kube-api-access-xbmmr\") pod \"placement-8891-account-create-update-b6rdx\" (UID: \"7744d973-bda8-482c-8c36-d3e9e7a484a4\") " pod="openstack/placement-8891-account-create-update-b6rdx" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.545908 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxxht\" (UniqueName: \"kubernetes.io/projected/4a118818-9c6a-4477-8a09-84e63dd51c45-kube-api-access-mxxht\") pod \"glance-db-create-cwd7k\" (UID: \"4a118818-9c6a-4477-8a09-84e63dd51c45\") " pod="openstack/glance-db-create-cwd7k" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.546193 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a118818-9c6a-4477-8a09-84e63dd51c45-operator-scripts\") pod \"glance-db-create-cwd7k\" (UID: \"4a118818-9c6a-4477-8a09-84e63dd51c45\") " pod="openstack/glance-db-create-cwd7k" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.562889 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8891-account-create-update-b6rdx" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.570979 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-95c8-account-create-update-kdk2t"] Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.572017 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-95c8-account-create-update-kdk2t" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.574624 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.590545 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-95c8-account-create-update-kdk2t"] Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.649867 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w522j\" (UniqueName: \"kubernetes.io/projected/342ccb2d-5b9e-433a-a8f8-9d074ee0887f-kube-api-access-w522j\") pod \"glance-95c8-account-create-update-kdk2t\" (UID: \"342ccb2d-5b9e-433a-a8f8-9d074ee0887f\") " pod="openstack/glance-95c8-account-create-update-kdk2t" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.649939 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/342ccb2d-5b9e-433a-a8f8-9d074ee0887f-operator-scripts\") pod \"glance-95c8-account-create-update-kdk2t\" (UID: \"342ccb2d-5b9e-433a-a8f8-9d074ee0887f\") " pod="openstack/glance-95c8-account-create-update-kdk2t" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.650003 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxxht\" (UniqueName: \"kubernetes.io/projected/4a118818-9c6a-4477-8a09-84e63dd51c45-kube-api-access-mxxht\") pod \"glance-db-create-cwd7k\" (UID: \"4a118818-9c6a-4477-8a09-84e63dd51c45\") " pod="openstack/glance-db-create-cwd7k" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.650036 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a118818-9c6a-4477-8a09-84e63dd51c45-operator-scripts\") pod \"glance-db-create-cwd7k\" (UID: \"4a118818-9c6a-4477-8a09-84e63dd51c45\") " pod="openstack/glance-db-create-cwd7k" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.650774 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a118818-9c6a-4477-8a09-84e63dd51c45-operator-scripts\") pod \"glance-db-create-cwd7k\" (UID: \"4a118818-9c6a-4477-8a09-84e63dd51c45\") " pod="openstack/glance-db-create-cwd7k" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.673352 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxxht\" (UniqueName: \"kubernetes.io/projected/4a118818-9c6a-4477-8a09-84e63dd51c45-kube-api-access-mxxht\") pod \"glance-db-create-cwd7k\" (UID: \"4a118818-9c6a-4477-8a09-84e63dd51c45\") " pod="openstack/glance-db-create-cwd7k" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.755016 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w522j\" (UniqueName: \"kubernetes.io/projected/342ccb2d-5b9e-433a-a8f8-9d074ee0887f-kube-api-access-w522j\") pod \"glance-95c8-account-create-update-kdk2t\" (UID: 
\"342ccb2d-5b9e-433a-a8f8-9d074ee0887f\") " pod="openstack/glance-95c8-account-create-update-kdk2t" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.755060 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/342ccb2d-5b9e-433a-a8f8-9d074ee0887f-operator-scripts\") pod \"glance-95c8-account-create-update-kdk2t\" (UID: \"342ccb2d-5b9e-433a-a8f8-9d074ee0887f\") " pod="openstack/glance-95c8-account-create-update-kdk2t" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.755749 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/342ccb2d-5b9e-433a-a8f8-9d074ee0887f-operator-scripts\") pod \"glance-95c8-account-create-update-kdk2t\" (UID: \"342ccb2d-5b9e-433a-a8f8-9d074ee0887f\") " pod="openstack/glance-95c8-account-create-update-kdk2t" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.777178 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cwd7k" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.782145 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.782243 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w522j\" (UniqueName: \"kubernetes.io/projected/342ccb2d-5b9e-433a-a8f8-9d074ee0887f-kube-api-access-w522j\") pod \"glance-95c8-account-create-update-kdk2t\" (UID: \"342ccb2d-5b9e-433a-a8f8-9d074ee0887f\") " pod="openstack/glance-95c8-account-create-update-kdk2t" Jan 31 04:05:34 crc kubenswrapper[4667]: I0131 04:05:34.930562 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-95c8-account-create-update-kdk2t" Jan 31 04:05:35 crc kubenswrapper[4667]: I0131 04:05:35.044750 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b82f-account-create-update-df9j2"] Jan 31 04:05:35 crc kubenswrapper[4667]: W0131 04:05:35.054576 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95e02c93_990d_43de_b11d_db36bc7524a6.slice/crio-d6813a111e2ebc3cf85ffb6535d75f278b9ad8dcf26cba3265274775b6494d32 WatchSource:0}: Error finding container d6813a111e2ebc3cf85ffb6535d75f278b9ad8dcf26cba3265274775b6494d32: Status 404 returned error can't find the container with id d6813a111e2ebc3cf85ffb6535d75f278b9ad8dcf26cba3265274775b6494d32 Jan 31 04:05:35 crc kubenswrapper[4667]: I0131 04:05:35.307397 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdd746e5-f56d-479b-9487-ea1bacca12d0" path="/var/lib/kubelet/pods/fdd746e5-f56d-479b-9487-ea1bacca12d0/volumes" Jan 31 04:05:35 crc kubenswrapper[4667]: I0131 04:05:35.346686 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b82f-account-create-update-df9j2" event={"ID":"95e02c93-990d-43de-b11d-db36bc7524a6","Type":"ContainerStarted","Data":"c6828019b6bb52b60ac5d4262021b0664d0f7e88a87ed5906e28173029b8ffcb"} Jan 31 04:05:35 crc kubenswrapper[4667]: I0131 04:05:35.346730 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b82f-account-create-update-df9j2" event={"ID":"95e02c93-990d-43de-b11d-db36bc7524a6","Type":"ContainerStarted","Data":"d6813a111e2ebc3cf85ffb6535d75f278b9ad8dcf26cba3265274775b6494d32"} Jan 31 04:05:35 crc kubenswrapper[4667]: I0131 04:05:35.353567 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8891-account-create-update-b6rdx"] Jan 31 04:05:35 crc kubenswrapper[4667]: I0131 04:05:35.389209 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-b82f-account-create-update-df9j2" podStartSLOduration=2.389188011 podStartE2EDuration="2.389188011s" podCreationTimestamp="2026-01-31 04:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:35.384013024 +0000 UTC m=+1058.900348313" watchObservedRunningTime="2026-01-31 04:05:35.389188011 +0000 UTC m=+1058.905523310" Jan 31 04:05:35 crc kubenswrapper[4667]: I0131 04:05:35.437985 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-85mm4"] Jan 31 04:05:35 crc kubenswrapper[4667]: I0131 04:05:35.450769 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cwd7k"] Jan 31 04:05:35 crc kubenswrapper[4667]: W0131 04:05:35.460242 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a118818_9c6a_4477_8a09_84e63dd51c45.slice/crio-87d364e8a7f2ccfcc1fb9a65bf317839bf23cab11921ce3bf1554ceec300fe18 WatchSource:0}: Error finding container 87d364e8a7f2ccfcc1fb9a65bf317839bf23cab11921ce3bf1554ceec300fe18: Status 404 returned error can't find the container with id 87d364e8a7f2ccfcc1fb9a65bf317839bf23cab11921ce3bf1554ceec300fe18 Jan 31 04:05:35 crc kubenswrapper[4667]: I0131 04:05:35.545995 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 31 04:05:35 crc kubenswrapper[4667]: I0131 
04:05:35.639533 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-95c8-account-create-update-kdk2t"] Jan 31 04:05:35 crc kubenswrapper[4667]: W0131 04:05:35.657211 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod342ccb2d_5b9e_433a_a8f8_9d074ee0887f.slice/crio-7bf221bf77f0d9c6e49d00b453dc2cbd0ccc2806e0516db1c870635fcd0904c6 WatchSource:0}: Error finding container 7bf221bf77f0d9c6e49d00b453dc2cbd0ccc2806e0516db1c870635fcd0904c6: Status 404 returned error can't find the container with id 7bf221bf77f0d9c6e49d00b453dc2cbd0ccc2806e0516db1c870635fcd0904c6 Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.191608 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-gdsvg"] Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.202374 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-gdsvg" Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.214542 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gdsvg"] Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.312009 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-gdsvg\" (UID: \"eb505db2-7884-475e-9bed-884650bfaeb8\") " pod="openstack/dnsmasq-dns-698758b865-gdsvg" Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.312075 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-config\") pod \"dnsmasq-dns-698758b865-gdsvg\" (UID: \"eb505db2-7884-475e-9bed-884650bfaeb8\") " pod="openstack/dnsmasq-dns-698758b865-gdsvg" Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.312152 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-gdsvg\" (UID: \"eb505db2-7884-475e-9bed-884650bfaeb8\") " pod="openstack/dnsmasq-dns-698758b865-gdsvg" Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.312175 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-dns-svc\") pod \"dnsmasq-dns-698758b865-gdsvg\" (UID: \"eb505db2-7884-475e-9bed-884650bfaeb8\") " pod="openstack/dnsmasq-dns-698758b865-gdsvg" Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.312203 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjk58\" (UniqueName: \"kubernetes.io/projected/eb505db2-7884-475e-9bed-884650bfaeb8-kube-api-access-gjk58\") pod \"dnsmasq-dns-698758b865-gdsvg\" (UID: \"eb505db2-7884-475e-9bed-884650bfaeb8\") " pod="openstack/dnsmasq-dns-698758b865-gdsvg" Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.388980 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-95c8-account-create-update-kdk2t" event={"ID":"342ccb2d-5b9e-433a-a8f8-9d074ee0887f","Type":"ContainerStarted","Data":"4907234e154b4af8f72d149a7fa846ab93ea3ae1bfe9f148e135a9abfc1476ec"} Jan 31 04:05:36 crc 
kubenswrapper[4667]: I0131 04:05:36.389022 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-95c8-account-create-update-kdk2t" event={"ID":"342ccb2d-5b9e-433a-a8f8-9d074ee0887f","Type":"ContainerStarted","Data":"7bf221bf77f0d9c6e49d00b453dc2cbd0ccc2806e0516db1c870635fcd0904c6"} Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.404989 4667 generic.go:334] "Generic (PLEG): container finished" podID="4a118818-9c6a-4477-8a09-84e63dd51c45" containerID="c9a697fcaa4cf6adf6c72ef0e2c7efa2f9b051de6210afd6a7295fb6aa211d05" exitCode=0 Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.405092 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cwd7k" event={"ID":"4a118818-9c6a-4477-8a09-84e63dd51c45","Type":"ContainerDied","Data":"c9a697fcaa4cf6adf6c72ef0e2c7efa2f9b051de6210afd6a7295fb6aa211d05"} Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.405118 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cwd7k" event={"ID":"4a118818-9c6a-4477-8a09-84e63dd51c45","Type":"ContainerStarted","Data":"87d364e8a7f2ccfcc1fb9a65bf317839bf23cab11921ce3bf1554ceec300fe18"} Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.406062 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-95c8-account-create-update-kdk2t" podStartSLOduration=2.406044596 podStartE2EDuration="2.406044596s" podCreationTimestamp="2026-01-31 04:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:36.405392748 +0000 UTC m=+1059.921728037" watchObservedRunningTime="2026-01-31 04:05:36.406044596 +0000 UTC m=+1059.922379895" Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.409727 4667 generic.go:334] "Generic (PLEG): container finished" podID="95e02c93-990d-43de-b11d-db36bc7524a6" containerID="c6828019b6bb52b60ac5d4262021b0664d0f7e88a87ed5906e28173029b8ffcb" exitCode=0 Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.409793 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b82f-account-create-update-df9j2" event={"ID":"95e02c93-990d-43de-b11d-db36bc7524a6","Type":"ContainerDied","Data":"c6828019b6bb52b60ac5d4262021b0664d0f7e88a87ed5906e28173029b8ffcb"} Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.413440 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-gdsvg\" (UID: \"eb505db2-7884-475e-9bed-884650bfaeb8\") " pod="openstack/dnsmasq-dns-698758b865-gdsvg" Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.413468 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-dns-svc\") pod \"dnsmasq-dns-698758b865-gdsvg\" (UID: \"eb505db2-7884-475e-9bed-884650bfaeb8\") " pod="openstack/dnsmasq-dns-698758b865-gdsvg" Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.413498 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjk58\" (UniqueName: \"kubernetes.io/projected/eb505db2-7884-475e-9bed-884650bfaeb8-kube-api-access-gjk58\") pod \"dnsmasq-dns-698758b865-gdsvg\" (UID: \"eb505db2-7884-475e-9bed-884650bfaeb8\") " pod="openstack/dnsmasq-dns-698758b865-gdsvg" Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 
04:05:36.413537 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-gdsvg\" (UID: \"eb505db2-7884-475e-9bed-884650bfaeb8\") " pod="openstack/dnsmasq-dns-698758b865-gdsvg" Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.413576 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-config\") pod \"dnsmasq-dns-698758b865-gdsvg\" (UID: \"eb505db2-7884-475e-9bed-884650bfaeb8\") " pod="openstack/dnsmasq-dns-698758b865-gdsvg" Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.414497 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-config\") pod \"dnsmasq-dns-698758b865-gdsvg\" (UID: \"eb505db2-7884-475e-9bed-884650bfaeb8\") " pod="openstack/dnsmasq-dns-698758b865-gdsvg" Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.415035 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-gdsvg\" (UID: \"eb505db2-7884-475e-9bed-884650bfaeb8\") " pod="openstack/dnsmasq-dns-698758b865-gdsvg" Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.415604 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-dns-svc\") pod \"dnsmasq-dns-698758b865-gdsvg\" (UID: \"eb505db2-7884-475e-9bed-884650bfaeb8\") " pod="openstack/dnsmasq-dns-698758b865-gdsvg" Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.416452 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-gdsvg\" (UID: \"eb505db2-7884-475e-9bed-884650bfaeb8\") " pod="openstack/dnsmasq-dns-698758b865-gdsvg" Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.422329 4667 generic.go:334] "Generic (PLEG): container finished" podID="7744d973-bda8-482c-8c36-d3e9e7a484a4" containerID="dac141f6b47a01ebadcfe1a91761940ab1b45d3096227a2682ccdf6430c0caf8" exitCode=0 Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.422444 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8891-account-create-update-b6rdx" event={"ID":"7744d973-bda8-482c-8c36-d3e9e7a484a4","Type":"ContainerDied","Data":"dac141f6b47a01ebadcfe1a91761940ab1b45d3096227a2682ccdf6430c0caf8"} Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.422475 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8891-account-create-update-b6rdx" event={"ID":"7744d973-bda8-482c-8c36-d3e9e7a484a4","Type":"ContainerStarted","Data":"32304162887edbf64eddebfffd4a814bca250537f409346f51d06e9513404a9c"} Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.445554 4667 generic.go:334] "Generic (PLEG): container finished" podID="343ee4a5-b9a5-48fe-863c-b668c87c384a" containerID="545ca0dded484f0be0e5baf82d7730cea308935c228c45dbc72f6ebaadcea358" exitCode=0 Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.446717 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-85mm4" 
event={"ID":"343ee4a5-b9a5-48fe-863c-b668c87c384a","Type":"ContainerDied","Data":"545ca0dded484f0be0e5baf82d7730cea308935c228c45dbc72f6ebaadcea358"} Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.446876 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-85mm4" event={"ID":"343ee4a5-b9a5-48fe-863c-b668c87c384a","Type":"ContainerStarted","Data":"a0046c219c821f1759a096569a00db19f6d0b4f69fef0aef95c8d4302154ba97"} Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.452450 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjk58\" (UniqueName: \"kubernetes.io/projected/eb505db2-7884-475e-9bed-884650bfaeb8-kube-api-access-gjk58\") pod \"dnsmasq-dns-698758b865-gdsvg\" (UID: \"eb505db2-7884-475e-9bed-884650bfaeb8\") " pod="openstack/dnsmasq-dns-698758b865-gdsvg" Jan 31 04:05:36 crc kubenswrapper[4667]: I0131 04:05:36.571082 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-gdsvg" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.092176 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gdsvg"] Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.382677 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.388032 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.393541 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.393940 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.396503 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.396679 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-lfvb8" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.430649 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.436503 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/49dfb349-068e-4574-9e26-3d413295d983-lock\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.436718 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/49dfb349-068e-4574-9e26-3d413295d983-cache\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.436823 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.436927 4667 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q4xg\" (UniqueName: \"kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-kube-api-access-7q4xg\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.437186 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.437267 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49dfb349-068e-4574-9e26-3d413295d983-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.454025 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gdsvg" event={"ID":"eb505db2-7884-475e-9bed-884650bfaeb8","Type":"ContainerStarted","Data":"42ab997c8a1966ee6ddda509f79a829a6048e746fcdd232cb1a32a65229d8a92"} Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.456163 4667 generic.go:334] "Generic (PLEG): container finished" podID="342ccb2d-5b9e-433a-a8f8-9d074ee0887f" containerID="4907234e154b4af8f72d149a7fa846ab93ea3ae1bfe9f148e135a9abfc1476ec" exitCode=0 Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.456349 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-95c8-account-create-update-kdk2t" event={"ID":"342ccb2d-5b9e-433a-a8f8-9d074ee0887f","Type":"ContainerDied","Data":"4907234e154b4af8f72d149a7fa846ab93ea3ae1bfe9f148e135a9abfc1476ec"} Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.544081 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.544146 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q4xg\" (UniqueName: \"kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-kube-api-access-7q4xg\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.544184 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.544205 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49dfb349-068e-4574-9e26-3d413295d983-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.544283 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/49dfb349-068e-4574-9e26-3d413295d983-lock\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.544369 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/49dfb349-068e-4574-9e26-3d413295d983-cache\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.544905 4667 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.545065 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/49dfb349-068e-4574-9e26-3d413295d983-cache\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:37 crc kubenswrapper[4667]: E0131 04:05:37.548745 4667 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 04:05:37 crc kubenswrapper[4667]: E0131 04:05:37.548783 4667 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 04:05:37 crc kubenswrapper[4667]: E0131 04:05:37.548854 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift podName:49dfb349-068e-4574-9e26-3d413295d983 nodeName:}" failed. No retries permitted until 2026-01-31 04:05:38.048815392 +0000 UTC m=+1061.565150691 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift") pod "swift-storage-0" (UID: "49dfb349-068e-4574-9e26-3d413295d983") : configmap "swift-ring-files" not found Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.553960 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/49dfb349-068e-4574-9e26-3d413295d983-lock\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.595054 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.595504 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49dfb349-068e-4574-9e26-3d413295d983-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.618166 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q4xg\" (UniqueName: \"kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-kube-api-access-7q4xg\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.984751 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-fpj9r"] Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.986072 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.988528 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.988832 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 31 04:05:37 crc kubenswrapper[4667]: I0131 04:05:37.996494 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.003571 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-fpj9r"] Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.049030 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b82f-account-create-update-df9j2" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.068349 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65cc9566-177a-41b5-b00c-83290fa14641-scripts\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.068436 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlvzl\" (UniqueName: \"kubernetes.io/projected/65cc9566-177a-41b5-b00c-83290fa14641-kube-api-access-hlvzl\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.068467 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/65cc9566-177a-41b5-b00c-83290fa14641-swiftconf\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.068513 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/65cc9566-177a-41b5-b00c-83290fa14641-etc-swift\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.068644 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65cc9566-177a-41b5-b00c-83290fa14641-combined-ca-bundle\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.068671 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/65cc9566-177a-41b5-b00c-83290fa14641-ring-data-devices\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.068727 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.068749 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/65cc9566-177a-41b5-b00c-83290fa14641-dispersionconf\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: E0131 04:05:38.068988 4667 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 04:05:38 crc kubenswrapper[4667]: E0131 04:05:38.069001 4667 projected.go:194] Error preparing data for projected volume 
etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 04:05:38 crc kubenswrapper[4667]: E0131 04:05:38.069039 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift podName:49dfb349-068e-4574-9e26-3d413295d983 nodeName:}" failed. No retries permitted until 2026-01-31 04:05:39.069027001 +0000 UTC m=+1062.585362300 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift") pod "swift-storage-0" (UID: "49dfb349-068e-4574-9e26-3d413295d983") : configmap "swift-ring-files" not found Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.170012 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95e02c93-990d-43de-b11d-db36bc7524a6-operator-scripts\") pod \"95e02c93-990d-43de-b11d-db36bc7524a6\" (UID: \"95e02c93-990d-43de-b11d-db36bc7524a6\") " Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.170128 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml9cx\" (UniqueName: \"kubernetes.io/projected/95e02c93-990d-43de-b11d-db36bc7524a6-kube-api-access-ml9cx\") pod \"95e02c93-990d-43de-b11d-db36bc7524a6\" (UID: \"95e02c93-990d-43de-b11d-db36bc7524a6\") " Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.170638 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/65cc9566-177a-41b5-b00c-83290fa14641-etc-swift\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.170688 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65cc9566-177a-41b5-b00c-83290fa14641-combined-ca-bundle\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.170714 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/65cc9566-177a-41b5-b00c-83290fa14641-ring-data-devices\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.170762 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/65cc9566-177a-41b5-b00c-83290fa14641-dispersionconf\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.170810 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65cc9566-177a-41b5-b00c-83290fa14641-scripts\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.170870 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlvzl\" (UniqueName: 
\"kubernetes.io/projected/65cc9566-177a-41b5-b00c-83290fa14641-kube-api-access-hlvzl\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.170902 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/65cc9566-177a-41b5-b00c-83290fa14641-swiftconf\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.177166 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95e02c93-990d-43de-b11d-db36bc7524a6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95e02c93-990d-43de-b11d-db36bc7524a6" (UID: "95e02c93-990d-43de-b11d-db36bc7524a6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.177764 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/65cc9566-177a-41b5-b00c-83290fa14641-etc-swift\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.180056 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/65cc9566-177a-41b5-b00c-83290fa14641-ring-data-devices\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.201802 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65cc9566-177a-41b5-b00c-83290fa14641-scripts\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.202488 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/65cc9566-177a-41b5-b00c-83290fa14641-dispersionconf\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.206437 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/65cc9566-177a-41b5-b00c-83290fa14641-swiftconf\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.207059 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65cc9566-177a-41b5-b00c-83290fa14641-combined-ca-bundle\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.210326 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95e02c93-990d-43de-b11d-db36bc7524a6-kube-api-access-ml9cx" (OuterVolumeSpecName: "kube-api-access-ml9cx") pod 
"95e02c93-990d-43de-b11d-db36bc7524a6" (UID: "95e02c93-990d-43de-b11d-db36bc7524a6"). InnerVolumeSpecName "kube-api-access-ml9cx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.221450 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlvzl\" (UniqueName: \"kubernetes.io/projected/65cc9566-177a-41b5-b00c-83290fa14641-kube-api-access-hlvzl\") pod \"swift-ring-rebalance-fpj9r\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.272541 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95e02c93-990d-43de-b11d-db36bc7524a6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.272579 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml9cx\" (UniqueName: \"kubernetes.io/projected/95e02c93-990d-43de-b11d-db36bc7524a6-kube-api-access-ml9cx\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.306915 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.390259 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-85mm4" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.474306 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8891-account-create-update-b6rdx" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.475089 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cwd7k" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.475517 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-85mm4" event={"ID":"343ee4a5-b9a5-48fe-863c-b668c87c384a","Type":"ContainerDied","Data":"a0046c219c821f1759a096569a00db19f6d0b4f69fef0aef95c8d4302154ba97"} Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.475551 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0046c219c821f1759a096569a00db19f6d0b4f69fef0aef95c8d4302154ba97" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.475594 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-85mm4" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.481491 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlhmq\" (UniqueName: \"kubernetes.io/projected/343ee4a5-b9a5-48fe-863c-b668c87c384a-kube-api-access-mlhmq\") pod \"343ee4a5-b9a5-48fe-863c-b668c87c384a\" (UID: \"343ee4a5-b9a5-48fe-863c-b668c87c384a\") " Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.481674 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/343ee4a5-b9a5-48fe-863c-b668c87c384a-operator-scripts\") pod \"343ee4a5-b9a5-48fe-863c-b668c87c384a\" (UID: \"343ee4a5-b9a5-48fe-863c-b668c87c384a\") " Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.482436 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/343ee4a5-b9a5-48fe-863c-b668c87c384a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "343ee4a5-b9a5-48fe-863c-b668c87c384a" (UID: "343ee4a5-b9a5-48fe-863c-b668c87c384a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.503315 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/343ee4a5-b9a5-48fe-863c-b668c87c384a-kube-api-access-mlhmq" (OuterVolumeSpecName: "kube-api-access-mlhmq") pod "343ee4a5-b9a5-48fe-863c-b668c87c384a" (UID: "343ee4a5-b9a5-48fe-863c-b668c87c384a"). InnerVolumeSpecName "kube-api-access-mlhmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.533449 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cwd7k" event={"ID":"4a118818-9c6a-4477-8a09-84e63dd51c45","Type":"ContainerDied","Data":"87d364e8a7f2ccfcc1fb9a65bf317839bf23cab11921ce3bf1554ceec300fe18"} Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.533494 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87d364e8a7f2ccfcc1fb9a65bf317839bf23cab11921ce3bf1554ceec300fe18" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.533576 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cwd7k" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.561412 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b82f-account-create-update-df9j2" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.561769 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b82f-account-create-update-df9j2" event={"ID":"95e02c93-990d-43de-b11d-db36bc7524a6","Type":"ContainerDied","Data":"d6813a111e2ebc3cf85ffb6535d75f278b9ad8dcf26cba3265274775b6494d32"} Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.562158 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6813a111e2ebc3cf85ffb6535d75f278b9ad8dcf26cba3265274775b6494d32" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.566579 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8891-account-create-update-b6rdx" event={"ID":"7744d973-bda8-482c-8c36-d3e9e7a484a4","Type":"ContainerDied","Data":"32304162887edbf64eddebfffd4a814bca250537f409346f51d06e9513404a9c"} Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.566677 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32304162887edbf64eddebfffd4a814bca250537f409346f51d06e9513404a9c" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.566772 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8891-account-create-update-b6rdx" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.591133 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxxht\" (UniqueName: \"kubernetes.io/projected/4a118818-9c6a-4477-8a09-84e63dd51c45-kube-api-access-mxxht\") pod \"4a118818-9c6a-4477-8a09-84e63dd51c45\" (UID: \"4a118818-9c6a-4477-8a09-84e63dd51c45\") " Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.591369 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7744d973-bda8-482c-8c36-d3e9e7a484a4-operator-scripts\") pod \"7744d973-bda8-482c-8c36-d3e9e7a484a4\" (UID: \"7744d973-bda8-482c-8c36-d3e9e7a484a4\") " Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.591478 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a118818-9c6a-4477-8a09-84e63dd51c45-operator-scripts\") pod \"4a118818-9c6a-4477-8a09-84e63dd51c45\" (UID: \"4a118818-9c6a-4477-8a09-84e63dd51c45\") " Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.591582 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbmmr\" (UniqueName: \"kubernetes.io/projected/7744d973-bda8-482c-8c36-d3e9e7a484a4-kube-api-access-xbmmr\") pod \"7744d973-bda8-482c-8c36-d3e9e7a484a4\" (UID: \"7744d973-bda8-482c-8c36-d3e9e7a484a4\") " Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.592124 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/343ee4a5-b9a5-48fe-863c-b668c87c384a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.592143 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlhmq\" (UniqueName: \"kubernetes.io/projected/343ee4a5-b9a5-48fe-863c-b668c87c384a-kube-api-access-mlhmq\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.594286 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/4a118818-9c6a-4477-8a09-84e63dd51c45-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a118818-9c6a-4477-8a09-84e63dd51c45" (UID: "4a118818-9c6a-4477-8a09-84e63dd51c45"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.595356 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7744d973-bda8-482c-8c36-d3e9e7a484a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7744d973-bda8-482c-8c36-d3e9e7a484a4" (UID: "7744d973-bda8-482c-8c36-d3e9e7a484a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.599782 4667 generic.go:334] "Generic (PLEG): container finished" podID="eb505db2-7884-475e-9bed-884650bfaeb8" containerID="4d655e58221932b0880d695fdce67a072fff6ed212e41e7e646ca0f10cdcccc0" exitCode=0 Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.600128 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gdsvg" event={"ID":"eb505db2-7884-475e-9bed-884650bfaeb8","Type":"ContainerDied","Data":"4d655e58221932b0880d695fdce67a072fff6ed212e41e7e646ca0f10cdcccc0"} Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.617267 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7744d973-bda8-482c-8c36-d3e9e7a484a4-kube-api-access-xbmmr" (OuterVolumeSpecName: "kube-api-access-xbmmr") pod "7744d973-bda8-482c-8c36-d3e9e7a484a4" (UID: "7744d973-bda8-482c-8c36-d3e9e7a484a4"). InnerVolumeSpecName "kube-api-access-xbmmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.617329 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a118818-9c6a-4477-8a09-84e63dd51c45-kube-api-access-mxxht" (OuterVolumeSpecName: "kube-api-access-mxxht") pod "4a118818-9c6a-4477-8a09-84e63dd51c45" (UID: "4a118818-9c6a-4477-8a09-84e63dd51c45"). InnerVolumeSpecName "kube-api-access-mxxht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.695269 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxxht\" (UniqueName: \"kubernetes.io/projected/4a118818-9c6a-4477-8a09-84e63dd51c45-kube-api-access-mxxht\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.695318 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7744d973-bda8-482c-8c36-d3e9e7a484a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.695333 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a118818-9c6a-4477-8a09-84e63dd51c45-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:38 crc kubenswrapper[4667]: I0131 04:05:38.695346 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbmmr\" (UniqueName: \"kubernetes.io/projected/7744d973-bda8-482c-8c36-d3e9e7a484a4-kube-api-access-xbmmr\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:39 crc kubenswrapper[4667]: I0131 04:05:39.019387 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-fpj9r"] Jan 31 04:05:39 crc kubenswrapper[4667]: I0131 04:05:39.115115 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:39 crc kubenswrapper[4667]: E0131 04:05:39.115295 4667 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 04:05:39 crc kubenswrapper[4667]: E0131 04:05:39.115319 4667 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 04:05:39 crc kubenswrapper[4667]: E0131 04:05:39.115362 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift podName:49dfb349-068e-4574-9e26-3d413295d983 nodeName:}" failed. No retries permitted until 2026-01-31 04:05:41.115346325 +0000 UTC m=+1064.631681624 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift") pod "swift-storage-0" (UID: "49dfb349-068e-4574-9e26-3d413295d983") : configmap "swift-ring-files" not found Jan 31 04:05:39 crc kubenswrapper[4667]: I0131 04:05:39.184808 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-95c8-account-create-update-kdk2t" Jan 31 04:05:39 crc kubenswrapper[4667]: I0131 04:05:39.319793 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/342ccb2d-5b9e-433a-a8f8-9d074ee0887f-operator-scripts\") pod \"342ccb2d-5b9e-433a-a8f8-9d074ee0887f\" (UID: \"342ccb2d-5b9e-433a-a8f8-9d074ee0887f\") " Jan 31 04:05:39 crc kubenswrapper[4667]: I0131 04:05:39.320079 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w522j\" (UniqueName: \"kubernetes.io/projected/342ccb2d-5b9e-433a-a8f8-9d074ee0887f-kube-api-access-w522j\") pod \"342ccb2d-5b9e-433a-a8f8-9d074ee0887f\" (UID: \"342ccb2d-5b9e-433a-a8f8-9d074ee0887f\") " Jan 31 04:05:39 crc kubenswrapper[4667]: I0131 04:05:39.320828 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/342ccb2d-5b9e-433a-a8f8-9d074ee0887f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "342ccb2d-5b9e-433a-a8f8-9d074ee0887f" (UID: "342ccb2d-5b9e-433a-a8f8-9d074ee0887f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:39 crc kubenswrapper[4667]: I0131 04:05:39.327203 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/342ccb2d-5b9e-433a-a8f8-9d074ee0887f-kube-api-access-w522j" (OuterVolumeSpecName: "kube-api-access-w522j") pod "342ccb2d-5b9e-433a-a8f8-9d074ee0887f" (UID: "342ccb2d-5b9e-433a-a8f8-9d074ee0887f"). InnerVolumeSpecName "kube-api-access-w522j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:39 crc kubenswrapper[4667]: I0131 04:05:39.423397 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/342ccb2d-5b9e-433a-a8f8-9d074ee0887f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:39 crc kubenswrapper[4667]: I0131 04:05:39.424345 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w522j\" (UniqueName: \"kubernetes.io/projected/342ccb2d-5b9e-433a-a8f8-9d074ee0887f-kube-api-access-w522j\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:39 crc kubenswrapper[4667]: I0131 04:05:39.612550 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gdsvg" event={"ID":"eb505db2-7884-475e-9bed-884650bfaeb8","Type":"ContainerStarted","Data":"b04549fa0f73592bcbd1e3591ef899631912f897fb9e301b70a85041e6b82237"} Jan 31 04:05:39 crc kubenswrapper[4667]: I0131 04:05:39.613134 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-gdsvg" Jan 31 04:05:39 crc kubenswrapper[4667]: I0131 04:05:39.614756 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fpj9r" event={"ID":"65cc9566-177a-41b5-b00c-83290fa14641","Type":"ContainerStarted","Data":"e111a6761b9165ce2e4183b0cb39c64ed3e7beb4a9fda04760b2e57bd76f721d"} Jan 31 04:05:39 crc kubenswrapper[4667]: I0131 04:05:39.618795 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-95c8-account-create-update-kdk2t" event={"ID":"342ccb2d-5b9e-433a-a8f8-9d074ee0887f","Type":"ContainerDied","Data":"7bf221bf77f0d9c6e49d00b453dc2cbd0ccc2806e0516db1c870635fcd0904c6"} Jan 31 04:05:39 crc kubenswrapper[4667]: I0131 04:05:39.618879 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-95c8-account-create-update-kdk2t" Jan 31 04:05:39 crc kubenswrapper[4667]: I0131 04:05:39.618902 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bf221bf77f0d9c6e49d00b453dc2cbd0ccc2806e0516db1c870635fcd0904c6" Jan 31 04:05:40 crc kubenswrapper[4667]: I0131 04:05:40.234866 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-gdsvg" podStartSLOduration=4.234824925 podStartE2EDuration="4.234824925s" podCreationTimestamp="2026-01-31 04:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:39.638405459 +0000 UTC m=+1063.154740758" watchObservedRunningTime="2026-01-31 04:05:40.234824925 +0000 UTC m=+1063.751160224" Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.162247 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:41 crc kubenswrapper[4667]: E0131 04:05:41.162444 4667 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 04:05:41 crc kubenswrapper[4667]: E0131 04:05:41.162795 4667 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 04:05:41 crc kubenswrapper[4667]: E0131 04:05:41.162874 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift podName:49dfb349-068e-4574-9e26-3d413295d983 nodeName:}" failed. No retries permitted until 2026-01-31 04:05:45.162851848 +0000 UTC m=+1068.679187147 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift") pod "swift-storage-0" (UID: "49dfb349-068e-4574-9e26-3d413295d983") : configmap "swift-ring-files" not found Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.173618 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-p55cz"] Jan 31 04:05:41 crc kubenswrapper[4667]: E0131 04:05:41.174312 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="342ccb2d-5b9e-433a-a8f8-9d074ee0887f" containerName="mariadb-account-create-update" Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.174353 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="342ccb2d-5b9e-433a-a8f8-9d074ee0887f" containerName="mariadb-account-create-update" Jan 31 04:05:41 crc kubenswrapper[4667]: E0131 04:05:41.174381 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="343ee4a5-b9a5-48fe-863c-b668c87c384a" containerName="mariadb-database-create" Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.174395 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="343ee4a5-b9a5-48fe-863c-b668c87c384a" containerName="mariadb-database-create" Jan 31 04:05:41 crc kubenswrapper[4667]: E0131 04:05:41.174441 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7744d973-bda8-482c-8c36-d3e9e7a484a4" containerName="mariadb-account-create-update" Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.174456 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="7744d973-bda8-482c-8c36-d3e9e7a484a4" containerName="mariadb-account-create-update" Jan 31 04:05:41 crc kubenswrapper[4667]: E0131 04:05:41.174525 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a118818-9c6a-4477-8a09-84e63dd51c45" containerName="mariadb-database-create" Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.174538 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a118818-9c6a-4477-8a09-84e63dd51c45" containerName="mariadb-database-create" Jan 31 04:05:41 crc kubenswrapper[4667]: E0131 04:05:41.174559 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e02c93-990d-43de-b11d-db36bc7524a6" containerName="mariadb-account-create-update" Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.174570 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e02c93-990d-43de-b11d-db36bc7524a6" containerName="mariadb-account-create-update" Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.174802 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="7744d973-bda8-482c-8c36-d3e9e7a484a4" containerName="mariadb-account-create-update" Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.174826 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="342ccb2d-5b9e-433a-a8f8-9d074ee0887f" containerName="mariadb-account-create-update" Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.174870 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a118818-9c6a-4477-8a09-84e63dd51c45" containerName="mariadb-database-create" Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.174901 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="95e02c93-990d-43de-b11d-db36bc7524a6" containerName="mariadb-account-create-update" Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.174919 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="343ee4a5-b9a5-48fe-863c-b668c87c384a" 
containerName="mariadb-database-create" Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.176436 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p55cz" Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.179480 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.189280 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p55cz"] Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.268251 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b251653c-b193-413c-a483-57c4288ae3c4-operator-scripts\") pod \"root-account-create-update-p55cz\" (UID: \"b251653c-b193-413c-a483-57c4288ae3c4\") " pod="openstack/root-account-create-update-p55cz" Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.268318 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqwvq\" (UniqueName: \"kubernetes.io/projected/b251653c-b193-413c-a483-57c4288ae3c4-kube-api-access-pqwvq\") pod \"root-account-create-update-p55cz\" (UID: \"b251653c-b193-413c-a483-57c4288ae3c4\") " pod="openstack/root-account-create-update-p55cz" Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.370872 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b251653c-b193-413c-a483-57c4288ae3c4-operator-scripts\") pod \"root-account-create-update-p55cz\" (UID: \"b251653c-b193-413c-a483-57c4288ae3c4\") " pod="openstack/root-account-create-update-p55cz" Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.371153 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqwvq\" (UniqueName: \"kubernetes.io/projected/b251653c-b193-413c-a483-57c4288ae3c4-kube-api-access-pqwvq\") pod \"root-account-create-update-p55cz\" (UID: \"b251653c-b193-413c-a483-57c4288ae3c4\") " pod="openstack/root-account-create-update-p55cz" Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.372013 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b251653c-b193-413c-a483-57c4288ae3c4-operator-scripts\") pod \"root-account-create-update-p55cz\" (UID: \"b251653c-b193-413c-a483-57c4288ae3c4\") " pod="openstack/root-account-create-update-p55cz" Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.401984 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqwvq\" (UniqueName: \"kubernetes.io/projected/b251653c-b193-413c-a483-57c4288ae3c4-kube-api-access-pqwvq\") pod \"root-account-create-update-p55cz\" (UID: \"b251653c-b193-413c-a483-57c4288ae3c4\") " pod="openstack/root-account-create-update-p55cz" Jan 31 04:05:41 crc kubenswrapper[4667]: I0131 04:05:41.498432 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p55cz" Jan 31 04:05:43 crc kubenswrapper[4667]: I0131 04:05:43.590018 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-4flkf"] Jan 31 04:05:43 crc kubenswrapper[4667]: I0131 04:05:43.591789 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-4flkf" Jan 31 04:05:43 crc kubenswrapper[4667]: I0131 04:05:43.608999 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4flkf"] Jan 31 04:05:43 crc kubenswrapper[4667]: I0131 04:05:43.722579 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a74e72f1-41d7-476c-abc1-00c32bfb03d8-operator-scripts\") pod \"keystone-db-create-4flkf\" (UID: \"a74e72f1-41d7-476c-abc1-00c32bfb03d8\") " pod="openstack/keystone-db-create-4flkf" Jan 31 04:05:43 crc kubenswrapper[4667]: I0131 04:05:43.723127 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsgct\" (UniqueName: \"kubernetes.io/projected/a74e72f1-41d7-476c-abc1-00c32bfb03d8-kube-api-access-gsgct\") pod \"keystone-db-create-4flkf\" (UID: \"a74e72f1-41d7-476c-abc1-00c32bfb03d8\") " pod="openstack/keystone-db-create-4flkf" Jan 31 04:05:43 crc kubenswrapper[4667]: I0131 04:05:43.824765 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsgct\" (UniqueName: \"kubernetes.io/projected/a74e72f1-41d7-476c-abc1-00c32bfb03d8-kube-api-access-gsgct\") pod \"keystone-db-create-4flkf\" (UID: \"a74e72f1-41d7-476c-abc1-00c32bfb03d8\") " pod="openstack/keystone-db-create-4flkf" Jan 31 04:05:43 crc kubenswrapper[4667]: I0131 04:05:43.825149 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a74e72f1-41d7-476c-abc1-00c32bfb03d8-operator-scripts\") pod \"keystone-db-create-4flkf\" (UID: \"a74e72f1-41d7-476c-abc1-00c32bfb03d8\") " pod="openstack/keystone-db-create-4flkf" Jan 31 04:05:43 crc kubenswrapper[4667]: I0131 04:05:43.826131 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a74e72f1-41d7-476c-abc1-00c32bfb03d8-operator-scripts\") pod \"keystone-db-create-4flkf\" (UID: \"a74e72f1-41d7-476c-abc1-00c32bfb03d8\") " pod="openstack/keystone-db-create-4flkf" Jan 31 04:05:43 crc kubenswrapper[4667]: I0131 04:05:43.861915 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsgct\" (UniqueName: \"kubernetes.io/projected/a74e72f1-41d7-476c-abc1-00c32bfb03d8-kube-api-access-gsgct\") pod \"keystone-db-create-4flkf\" (UID: \"a74e72f1-41d7-476c-abc1-00c32bfb03d8\") " pod="openstack/keystone-db-create-4flkf" Jan 31 04:05:43 crc kubenswrapper[4667]: I0131 04:05:43.940241 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4flkf" Jan 31 04:05:44 crc kubenswrapper[4667]: I0131 04:05:44.805990 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-9mmhm"] Jan 31 04:05:44 crc kubenswrapper[4667]: I0131 04:05:44.808129 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9mmhm" Jan 31 04:05:44 crc kubenswrapper[4667]: I0131 04:05:44.817124 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 31 04:05:44 crc kubenswrapper[4667]: I0131 04:05:44.817580 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8pgdz" Jan 31 04:05:44 crc kubenswrapper[4667]: I0131 04:05:44.837248 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9mmhm"] Jan 31 04:05:44 crc kubenswrapper[4667]: I0131 04:05:44.945504 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhcl4\" (UniqueName: \"kubernetes.io/projected/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-kube-api-access-mhcl4\") pod \"glance-db-sync-9mmhm\" (UID: \"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4\") " pod="openstack/glance-db-sync-9mmhm" Jan 31 04:05:44 crc kubenswrapper[4667]: I0131 04:05:44.945590 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-combined-ca-bundle\") pod \"glance-db-sync-9mmhm\" (UID: \"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4\") " pod="openstack/glance-db-sync-9mmhm" Jan 31 04:05:44 crc kubenswrapper[4667]: I0131 04:05:44.945616 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-db-sync-config-data\") pod \"glance-db-sync-9mmhm\" (UID: \"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4\") " pod="openstack/glance-db-sync-9mmhm" Jan 31 04:05:44 crc kubenswrapper[4667]: I0131 04:05:44.945652 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-config-data\") pod \"glance-db-sync-9mmhm\" (UID: \"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4\") " pod="openstack/glance-db-sync-9mmhm" Jan 31 04:05:45 crc kubenswrapper[4667]: I0131 04:05:45.047103 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhcl4\" (UniqueName: \"kubernetes.io/projected/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-kube-api-access-mhcl4\") pod \"glance-db-sync-9mmhm\" (UID: \"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4\") " pod="openstack/glance-db-sync-9mmhm" Jan 31 04:05:45 crc kubenswrapper[4667]: I0131 04:05:45.047216 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-combined-ca-bundle\") pod \"glance-db-sync-9mmhm\" (UID: \"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4\") " pod="openstack/glance-db-sync-9mmhm" Jan 31 04:05:45 crc kubenswrapper[4667]: I0131 04:05:45.047242 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-db-sync-config-data\") pod \"glance-db-sync-9mmhm\" (UID: \"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4\") " pod="openstack/glance-db-sync-9mmhm" Jan 31 04:05:45 crc kubenswrapper[4667]: I0131 04:05:45.047277 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-config-data\") pod 
\"glance-db-sync-9mmhm\" (UID: \"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4\") " pod="openstack/glance-db-sync-9mmhm" Jan 31 04:05:45 crc kubenswrapper[4667]: I0131 04:05:45.052989 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-combined-ca-bundle\") pod \"glance-db-sync-9mmhm\" (UID: \"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4\") " pod="openstack/glance-db-sync-9mmhm" Jan 31 04:05:45 crc kubenswrapper[4667]: I0131 04:05:45.057566 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-db-sync-config-data\") pod \"glance-db-sync-9mmhm\" (UID: \"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4\") " pod="openstack/glance-db-sync-9mmhm" Jan 31 04:05:45 crc kubenswrapper[4667]: I0131 04:05:45.065558 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-config-data\") pod \"glance-db-sync-9mmhm\" (UID: \"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4\") " pod="openstack/glance-db-sync-9mmhm" Jan 31 04:05:45 crc kubenswrapper[4667]: I0131 04:05:45.067473 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhcl4\" (UniqueName: \"kubernetes.io/projected/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-kube-api-access-mhcl4\") pod \"glance-db-sync-9mmhm\" (UID: \"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4\") " pod="openstack/glance-db-sync-9mmhm" Jan 31 04:05:45 crc kubenswrapper[4667]: I0131 04:05:45.138622 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9mmhm" Jan 31 04:05:45 crc kubenswrapper[4667]: I0131 04:05:45.251763 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:45 crc kubenswrapper[4667]: E0131 04:05:45.251976 4667 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 04:05:45 crc kubenswrapper[4667]: E0131 04:05:45.252007 4667 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 04:05:45 crc kubenswrapper[4667]: E0131 04:05:45.252065 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift podName:49dfb349-068e-4574-9e26-3d413295d983 nodeName:}" failed. No retries permitted until 2026-01-31 04:05:53.252047581 +0000 UTC m=+1076.768382880 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift") pod "swift-storage-0" (UID: "49dfb349-068e-4574-9e26-3d413295d983") : configmap "swift-ring-files" not found
Jan 31 04:05:45 crc kubenswrapper[4667]: I0131 04:05:45.704301 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 04:05:45 crc kubenswrapper[4667]: I0131 04:05:45.704388 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 04:05:45 crc kubenswrapper[4667]: I0131 04:05:45.704441 4667 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g"
Jan 31 04:05:45 crc kubenswrapper[4667]: I0131 04:05:45.705309 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a2768bea3b08958c54e155e8f29b14218602ccc55cf630ccf4d7736c3b3b12ec"} pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 04:05:45 crc kubenswrapper[4667]: I0131 04:05:45.705371 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" containerID="cri-o://a2768bea3b08958c54e155e8f29b14218602ccc55cf630ccf4d7736c3b3b12ec" gracePeriod=600
Jan 31 04:05:46 crc kubenswrapper[4667]: I0131 04:05:46.117223 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 31 04:05:46 crc kubenswrapper[4667]: I0131 04:05:46.578203 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-gdsvg"
Jan 31 04:05:46 crc kubenswrapper[4667]: I0131 04:05:46.750158 4667 generic.go:334] "Generic (PLEG): container finished" podID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerID="a2768bea3b08958c54e155e8f29b14218602ccc55cf630ccf4d7736c3b3b12ec" exitCode=0
Jan 31 04:05:46 crc kubenswrapper[4667]: I0131 04:05:46.750614 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerDied","Data":"a2768bea3b08958c54e155e8f29b14218602ccc55cf630ccf4d7736c3b3b12ec"}
Jan 31 04:05:46 crc kubenswrapper[4667]: I0131 04:05:46.750690 4667 scope.go:117] "RemoveContainer" containerID="1e53b0068c5af26480719e1ae76b8eb2cdae9fcbfa4d0840e77aebecf0501325"
Jan 31 04:05:46 crc kubenswrapper[4667]: I0131 04:05:46.821583 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-nh6hz"]
Jan 31 04:05:46 crc kubenswrapper[4667]: I0131 04:05:46.830687 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" podUID="5c3f2700-4311-41b0-8b3a-20d1dd1db82f" containerName="dnsmasq-dns" containerID="cri-o://3d55288e1ad6d29fe991ad00ab9924d42e4bba0be58b1cdb8eae289b38942fee" gracePeriod=10
Jan 31 04:05:47 crc kubenswrapper[4667]: I0131 04:05:47.178493 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p55cz"]
Jan 31 04:05:47 crc kubenswrapper[4667]: I0131 04:05:47.424489 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4flkf"]
Jan 31 04:05:47 crc kubenswrapper[4667]: I0131 04:05:47.443038 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" podUID="5c3f2700-4311-41b0-8b3a-20d1dd1db82f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused"
Jan 31 04:05:47 crc kubenswrapper[4667]: I0131 04:05:47.503905 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9mmhm"]
Jan 31 04:05:47 crc kubenswrapper[4667]: W0131 04:05:47.526099 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod959a81ea_7cf7_4fc4_b84d_14699d4e6bb4.slice/crio-16279b83a5c455921feb7d9450a09dc88f20a0fd3db6f0a588f6a3e7cb8fd5bf WatchSource:0}: Error finding container 16279b83a5c455921feb7d9450a09dc88f20a0fd3db6f0a588f6a3e7cb8fd5bf: Status 404 returned error can't find the container with id 16279b83a5c455921feb7d9450a09dc88f20a0fd3db6f0a588f6a3e7cb8fd5bf
Jan 31 04:05:47 crc kubenswrapper[4667]: I0131 04:05:47.780207 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9mmhm" event={"ID":"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4","Type":"ContainerStarted","Data":"16279b83a5c455921feb7d9450a09dc88f20a0fd3db6f0a588f6a3e7cb8fd5bf"}
Jan 31 04:05:47 crc kubenswrapper[4667]: I0131 04:05:47.790075 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fpj9r" event={"ID":"65cc9566-177a-41b5-b00c-83290fa14641","Type":"ContainerStarted","Data":"53ce80ab8f5a80d429e5480165916f4f2f74db201f1b227be4e0edfb0b1d1fcf"}
Jan 31 04:05:47 crc kubenswrapper[4667]: I0131 04:05:47.800143 4667 generic.go:334] "Generic (PLEG): container finished" podID="5c3f2700-4311-41b0-8b3a-20d1dd1db82f" containerID="3d55288e1ad6d29fe991ad00ab9924d42e4bba0be58b1cdb8eae289b38942fee" exitCode=0
Jan 31 04:05:47 crc kubenswrapper[4667]: I0131 04:05:47.800212 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" event={"ID":"5c3f2700-4311-41b0-8b3a-20d1dd1db82f","Type":"ContainerDied","Data":"3d55288e1ad6d29fe991ad00ab9924d42e4bba0be58b1cdb8eae289b38942fee"}
Jan 31 04:05:47 crc kubenswrapper[4667]: I0131 04:05:47.811596 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerStarted","Data":"4def2f985a42835fdac5d21069cf64f18010ecd6521e48ae16ef15b594559e50"}
Jan 31 04:05:47 crc kubenswrapper[4667]: I0131 04:05:47.816431 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p55cz" event={"ID":"b251653c-b193-413c-a483-57c4288ae3c4","Type":"ContainerStarted","Data":"78e89763bceccbecfbe1202910c776c519e0d7c00395f274122add8460a8db1c"}
Jan 31 04:05:47 crc kubenswrapper[4667]: I0131 04:05:47.816470 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p55cz" event={"ID":"b251653c-b193-413c-a483-57c4288ae3c4","Type":"ContainerStarted","Data":"bd0b8a3156e3c8351ac4f786ede2e1c56790a2775adebbfb759a0ab7e7d3b0e6"}
Jan 31 04:05:47 crc kubenswrapper[4667]: I0131 04:05:47.819492 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-fpj9r" podStartSLOduration=3.370869706 podStartE2EDuration="10.819481135s" podCreationTimestamp="2026-01-31 04:05:37 +0000 UTC" firstStartedPulling="2026-01-31 04:05:39.024631574 +0000 UTC m=+1062.540966873" lastFinishedPulling="2026-01-31 04:05:46.473242993 +0000 UTC m=+1069.989578302" observedRunningTime="2026-01-31 04:05:47.817936704 +0000 UTC m=+1071.334272023" watchObservedRunningTime="2026-01-31 04:05:47.819481135 +0000 UTC m=+1071.335816434"
Jan 31 04:05:47 crc kubenswrapper[4667]: I0131 04:05:47.846254 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4flkf" event={"ID":"a74e72f1-41d7-476c-abc1-00c32bfb03d8","Type":"ContainerStarted","Data":"84146adce4abc4bc3e847090dac8d5de1a3ebfc375b768f87f849339b6dda849"}
Jan 31 04:05:47 crc kubenswrapper[4667]: I0131 04:05:47.846303 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4flkf" event={"ID":"a74e72f1-41d7-476c-abc1-00c32bfb03d8","Type":"ContainerStarted","Data":"cab5e6a3496d66d973eb19e39586d80c735819a6257df9e0c5f2867746b00b9b"}
Jan 31 04:05:47 crc kubenswrapper[4667]: I0131 04:05:47.875795 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-p55cz" podStartSLOduration=6.875776655 podStartE2EDuration="6.875776655s" podCreationTimestamp="2026-01-31 04:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:47.873654719 +0000 UTC m=+1071.389990018" watchObservedRunningTime="2026-01-31 04:05:47.875776655 +0000 UTC m=+1071.392111954"
Jan 31 04:05:47 crc kubenswrapper[4667]: I0131 04:05:47.924706 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-4flkf" podStartSLOduration=4.9246796889999995 podStartE2EDuration="4.924679689s" podCreationTimestamp="2026-01-31 04:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:05:47.903459868 +0000 UTC m=+1071.419795167" watchObservedRunningTime="2026-01-31 04:05:47.924679689 +0000 UTC m=+1071.441014988"
Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.021670 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz"
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.153070 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-ovsdbserver-sb\") pod \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\" (UID: \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\") " Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.153637 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-config\") pod \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\" (UID: \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\") " Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.153677 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqsmc\" (UniqueName: \"kubernetes.io/projected/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-kube-api-access-tqsmc\") pod \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\" (UID: \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\") " Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.153732 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-ovsdbserver-nb\") pod \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\" (UID: \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\") " Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.154000 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-dns-svc\") pod \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\" (UID: \"5c3f2700-4311-41b0-8b3a-20d1dd1db82f\") " Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.162129 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-kube-api-access-tqsmc" (OuterVolumeSpecName: "kube-api-access-tqsmc") pod "5c3f2700-4311-41b0-8b3a-20d1dd1db82f" (UID: "5c3f2700-4311-41b0-8b3a-20d1dd1db82f"). InnerVolumeSpecName "kube-api-access-tqsmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.205603 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5c3f2700-4311-41b0-8b3a-20d1dd1db82f" (UID: "5c3f2700-4311-41b0-8b3a-20d1dd1db82f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.217511 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5c3f2700-4311-41b0-8b3a-20d1dd1db82f" (UID: "5c3f2700-4311-41b0-8b3a-20d1dd1db82f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.219353 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-config" (OuterVolumeSpecName: "config") pod "5c3f2700-4311-41b0-8b3a-20d1dd1db82f" (UID: "5c3f2700-4311-41b0-8b3a-20d1dd1db82f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.226352 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5c3f2700-4311-41b0-8b3a-20d1dd1db82f" (UID: "5c3f2700-4311-41b0-8b3a-20d1dd1db82f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.256801 4667 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.259134 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.259234 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.259294 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqsmc\" (UniqueName: \"kubernetes.io/projected/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-kube-api-access-tqsmc\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.259350 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c3f2700-4311-41b0-8b3a-20d1dd1db82f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.879525 4667 generic.go:334] "Generic (PLEG): container finished" podID="a74e72f1-41d7-476c-abc1-00c32bfb03d8" containerID="84146adce4abc4bc3e847090dac8d5de1a3ebfc375b768f87f849339b6dda849" exitCode=0 Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.879690 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4flkf" event={"ID":"a74e72f1-41d7-476c-abc1-00c32bfb03d8","Type":"ContainerDied","Data":"84146adce4abc4bc3e847090dac8d5de1a3ebfc375b768f87f849339b6dda849"} Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.883914 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" event={"ID":"5c3f2700-4311-41b0-8b3a-20d1dd1db82f","Type":"ContainerDied","Data":"c7059081f800e139b361cd7fe1f99ce0f74719139120a6f3d616bb4e99af6906"} Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.883932 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-nh6hz" Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.883974 4667 scope.go:117] "RemoveContainer" containerID="3d55288e1ad6d29fe991ad00ab9924d42e4bba0be58b1cdb8eae289b38942fee" Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.888701 4667 generic.go:334] "Generic (PLEG): container finished" podID="b251653c-b193-413c-a483-57c4288ae3c4" containerID="78e89763bceccbecfbe1202910c776c519e0d7c00395f274122add8460a8db1c" exitCode=0 Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.890060 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p55cz" event={"ID":"b251653c-b193-413c-a483-57c4288ae3c4","Type":"ContainerDied","Data":"78e89763bceccbecfbe1202910c776c519e0d7c00395f274122add8460a8db1c"} Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.918556 4667 scope.go:117] "RemoveContainer" containerID="6087cdfdc5c24ec633d027c8af5cbad2f61895dc9144974a2c24bfba451f1cad" Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.961978 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-nh6hz"] Jan 31 04:05:48 crc kubenswrapper[4667]: I0131 04:05:48.976079 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-nh6hz"] Jan 31 04:05:49 crc kubenswrapper[4667]: I0131 04:05:49.302804 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c3f2700-4311-41b0-8b3a-20d1dd1db82f" path="/var/lib/kubelet/pods/5c3f2700-4311-41b0-8b3a-20d1dd1db82f/volumes" Jan 31 04:05:49 crc kubenswrapper[4667]: I0131 04:05:49.305548 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.024609 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cn9wc" podUID="39c3d98f-a6b1-4558-b565-c9f8c3afa543" containerName="ovn-controller" probeResult="failure" output=< Jan 31 04:05:50 crc kubenswrapper[4667]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 31 04:05:50 crc kubenswrapper[4667]: > Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.499624 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p55cz" Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.506263 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-4flkf" Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.610042 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a74e72f1-41d7-476c-abc1-00c32bfb03d8-operator-scripts\") pod \"a74e72f1-41d7-476c-abc1-00c32bfb03d8\" (UID: \"a74e72f1-41d7-476c-abc1-00c32bfb03d8\") " Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.610157 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b251653c-b193-413c-a483-57c4288ae3c4-operator-scripts\") pod \"b251653c-b193-413c-a483-57c4288ae3c4\" (UID: \"b251653c-b193-413c-a483-57c4288ae3c4\") " Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.610275 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqwvq\" (UniqueName: \"kubernetes.io/projected/b251653c-b193-413c-a483-57c4288ae3c4-kube-api-access-pqwvq\") pod \"b251653c-b193-413c-a483-57c4288ae3c4\" (UID: \"b251653c-b193-413c-a483-57c4288ae3c4\") " Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.610331 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsgct\" (UniqueName: \"kubernetes.io/projected/a74e72f1-41d7-476c-abc1-00c32bfb03d8-kube-api-access-gsgct\") pod \"a74e72f1-41d7-476c-abc1-00c32bfb03d8\" (UID: \"a74e72f1-41d7-476c-abc1-00c32bfb03d8\") " Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.611727 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b251653c-b193-413c-a483-57c4288ae3c4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b251653c-b193-413c-a483-57c4288ae3c4" (UID: "b251653c-b193-413c-a483-57c4288ae3c4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.612100 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a74e72f1-41d7-476c-abc1-00c32bfb03d8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a74e72f1-41d7-476c-abc1-00c32bfb03d8" (UID: "a74e72f1-41d7-476c-abc1-00c32bfb03d8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.641165 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b251653c-b193-413c-a483-57c4288ae3c4-kube-api-access-pqwvq" (OuterVolumeSpecName: "kube-api-access-pqwvq") pod "b251653c-b193-413c-a483-57c4288ae3c4" (UID: "b251653c-b193-413c-a483-57c4288ae3c4"). InnerVolumeSpecName "kube-api-access-pqwvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.651142 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a74e72f1-41d7-476c-abc1-00c32bfb03d8-kube-api-access-gsgct" (OuterVolumeSpecName: "kube-api-access-gsgct") pod "a74e72f1-41d7-476c-abc1-00c32bfb03d8" (UID: "a74e72f1-41d7-476c-abc1-00c32bfb03d8"). InnerVolumeSpecName "kube-api-access-gsgct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.714551 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b251653c-b193-413c-a483-57c4288ae3c4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.714603 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqwvq\" (UniqueName: \"kubernetes.io/projected/b251653c-b193-413c-a483-57c4288ae3c4-kube-api-access-pqwvq\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.714651 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsgct\" (UniqueName: \"kubernetes.io/projected/a74e72f1-41d7-476c-abc1-00c32bfb03d8-kube-api-access-gsgct\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.714670 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a74e72f1-41d7-476c-abc1-00c32bfb03d8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.912536 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4flkf" event={"ID":"a74e72f1-41d7-476c-abc1-00c32bfb03d8","Type":"ContainerDied","Data":"cab5e6a3496d66d973eb19e39586d80c735819a6257df9e0c5f2867746b00b9b"} Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.912607 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cab5e6a3496d66d973eb19e39586d80c735819a6257df9e0c5f2867746b00b9b" Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.912573 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4flkf" Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.915408 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p55cz" event={"ID":"b251653c-b193-413c-a483-57c4288ae3c4","Type":"ContainerDied","Data":"bd0b8a3156e3c8351ac4f786ede2e1c56790a2775adebbfb759a0ab7e7d3b0e6"} Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.915605 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd0b8a3156e3c8351ac4f786ede2e1c56790a2775adebbfb759a0ab7e7d3b0e6" Jan 31 04:05:50 crc kubenswrapper[4667]: I0131 04:05:50.915554 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-p55cz" Jan 31 04:05:52 crc kubenswrapper[4667]: I0131 04:05:52.546097 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-p55cz"] Jan 31 04:05:52 crc kubenswrapper[4667]: I0131 04:05:52.557207 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-p55cz"] Jan 31 04:05:53 crc kubenswrapper[4667]: I0131 04:05:53.271039 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:05:53 crc kubenswrapper[4667]: E0131 04:05:53.271222 4667 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 31 04:05:53 crc kubenswrapper[4667]: E0131 04:05:53.271240 4667 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 31 04:05:53 crc kubenswrapper[4667]: E0131 04:05:53.271300 4667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift podName:49dfb349-068e-4574-9e26-3d413295d983 nodeName:}" failed. No retries permitted until 2026-01-31 04:06:09.271283323 +0000 UTC m=+1092.787618622 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift") pod "swift-storage-0" (UID: "49dfb349-068e-4574-9e26-3d413295d983") : configmap "swift-ring-files" not found Jan 31 04:05:53 crc kubenswrapper[4667]: I0131 04:05:53.294986 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b251653c-b193-413c-a483-57c4288ae3c4" path="/var/lib/kubelet/pods/b251653c-b193-413c-a483-57c4288ae3c4/volumes" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.030756 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cn9wc" podUID="39c3d98f-a6b1-4558-b565-c9f8c3afa543" containerName="ovn-controller" probeResult="failure" output=< Jan 31 04:05:55 crc kubenswrapper[4667]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 31 04:05:55 crc kubenswrapper[4667]: > Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.050585 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.074700 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-m545l" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.296286 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cn9wc-config-5rs25"] Jan 31 04:05:55 crc kubenswrapper[4667]: E0131 04:05:55.296691 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a74e72f1-41d7-476c-abc1-00c32bfb03d8" containerName="mariadb-database-create" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.296736 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="a74e72f1-41d7-476c-abc1-00c32bfb03d8" containerName="mariadb-database-create" Jan 31 04:05:55 crc kubenswrapper[4667]: E0131 04:05:55.296764 4667 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b251653c-b193-413c-a483-57c4288ae3c4" containerName="mariadb-account-create-update" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.296772 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="b251653c-b193-413c-a483-57c4288ae3c4" containerName="mariadb-account-create-update" Jan 31 04:05:55 crc kubenswrapper[4667]: E0131 04:05:55.296809 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c3f2700-4311-41b0-8b3a-20d1dd1db82f" containerName="init" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.296817 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c3f2700-4311-41b0-8b3a-20d1dd1db82f" containerName="init" Jan 31 04:05:55 crc kubenswrapper[4667]: E0131 04:05:55.296832 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c3f2700-4311-41b0-8b3a-20d1dd1db82f" containerName="dnsmasq-dns" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.296864 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c3f2700-4311-41b0-8b3a-20d1dd1db82f" containerName="dnsmasq-dns" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.297136 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="b251653c-b193-413c-a483-57c4288ae3c4" containerName="mariadb-account-create-update" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.297180 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="a74e72f1-41d7-476c-abc1-00c32bfb03d8" containerName="mariadb-database-create" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.297204 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c3f2700-4311-41b0-8b3a-20d1dd1db82f" containerName="dnsmasq-dns" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.298201 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.301232 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.318925 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwb6b\" (UniqueName: \"kubernetes.io/projected/a33db910-c042-4186-a9a2-bb41aa3ec414-kube-api-access-nwb6b\") pod \"ovn-controller-cn9wc-config-5rs25\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.318975 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a33db910-c042-4186-a9a2-bb41aa3ec414-var-run-ovn\") pod \"ovn-controller-cn9wc-config-5rs25\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.319008 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a33db910-c042-4186-a9a2-bb41aa3ec414-scripts\") pod \"ovn-controller-cn9wc-config-5rs25\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.319138 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a33db910-c042-4186-a9a2-bb41aa3ec414-var-run\") pod \"ovn-controller-cn9wc-config-5rs25\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.319159 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a33db910-c042-4186-a9a2-bb41aa3ec414-var-log-ovn\") pod \"ovn-controller-cn9wc-config-5rs25\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.319227 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a33db910-c042-4186-a9a2-bb41aa3ec414-additional-scripts\") pod \"ovn-controller-cn9wc-config-5rs25\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.338018 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cn9wc-config-5rs25"] Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.421184 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a33db910-c042-4186-a9a2-bb41aa3ec414-var-run\") pod \"ovn-controller-cn9wc-config-5rs25\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.421238 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a33db910-c042-4186-a9a2-bb41aa3ec414-var-log-ovn\") pod 
\"ovn-controller-cn9wc-config-5rs25\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.421288 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a33db910-c042-4186-a9a2-bb41aa3ec414-additional-scripts\") pod \"ovn-controller-cn9wc-config-5rs25\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.421357 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwb6b\" (UniqueName: \"kubernetes.io/projected/a33db910-c042-4186-a9a2-bb41aa3ec414-kube-api-access-nwb6b\") pod \"ovn-controller-cn9wc-config-5rs25\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.421384 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a33db910-c042-4186-a9a2-bb41aa3ec414-var-run-ovn\") pod \"ovn-controller-cn9wc-config-5rs25\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.421410 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a33db910-c042-4186-a9a2-bb41aa3ec414-scripts\") pod \"ovn-controller-cn9wc-config-5rs25\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.423417 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a33db910-c042-4186-a9a2-bb41aa3ec414-scripts\") pod \"ovn-controller-cn9wc-config-5rs25\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.423741 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a33db910-c042-4186-a9a2-bb41aa3ec414-var-run\") pod \"ovn-controller-cn9wc-config-5rs25\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.423813 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a33db910-c042-4186-a9a2-bb41aa3ec414-var-log-ovn\") pod \"ovn-controller-cn9wc-config-5rs25\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.424247 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a33db910-c042-4186-a9a2-bb41aa3ec414-additional-scripts\") pod \"ovn-controller-cn9wc-config-5rs25\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.424623 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a33db910-c042-4186-a9a2-bb41aa3ec414-var-run-ovn\") pod 
\"ovn-controller-cn9wc-config-5rs25\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.468160 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwb6b\" (UniqueName: \"kubernetes.io/projected/a33db910-c042-4186-a9a2-bb41aa3ec414-kube-api-access-nwb6b\") pod \"ovn-controller-cn9wc-config-5rs25\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:05:55 crc kubenswrapper[4667]: I0131 04:05:55.624512 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:05:56 crc kubenswrapper[4667]: I0131 04:05:56.248014 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2tp4z"] Jan 31 04:05:56 crc kubenswrapper[4667]: I0131 04:05:56.249770 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2tp4z" Jan 31 04:05:56 crc kubenswrapper[4667]: I0131 04:05:56.252104 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 31 04:05:56 crc kubenswrapper[4667]: I0131 04:05:56.262061 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2tp4z"] Jan 31 04:05:56 crc kubenswrapper[4667]: I0131 04:05:56.346805 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db5fe987-662e-4d91-b309-8dea3f6a33a8-operator-scripts\") pod \"root-account-create-update-2tp4z\" (UID: \"db5fe987-662e-4d91-b309-8dea3f6a33a8\") " pod="openstack/root-account-create-update-2tp4z" Jan 31 04:05:56 crc kubenswrapper[4667]: I0131 04:05:56.347192 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsw4s\" (UniqueName: \"kubernetes.io/projected/db5fe987-662e-4d91-b309-8dea3f6a33a8-kube-api-access-zsw4s\") pod \"root-account-create-update-2tp4z\" (UID: \"db5fe987-662e-4d91-b309-8dea3f6a33a8\") " pod="openstack/root-account-create-update-2tp4z" Jan 31 04:05:56 crc kubenswrapper[4667]: I0131 04:05:56.449528 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db5fe987-662e-4d91-b309-8dea3f6a33a8-operator-scripts\") pod \"root-account-create-update-2tp4z\" (UID: \"db5fe987-662e-4d91-b309-8dea3f6a33a8\") " pod="openstack/root-account-create-update-2tp4z" Jan 31 04:05:56 crc kubenswrapper[4667]: I0131 04:05:56.449695 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsw4s\" (UniqueName: \"kubernetes.io/projected/db5fe987-662e-4d91-b309-8dea3f6a33a8-kube-api-access-zsw4s\") pod \"root-account-create-update-2tp4z\" (UID: \"db5fe987-662e-4d91-b309-8dea3f6a33a8\") " pod="openstack/root-account-create-update-2tp4z" Jan 31 04:05:56 crc kubenswrapper[4667]: I0131 04:05:56.450736 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db5fe987-662e-4d91-b309-8dea3f6a33a8-operator-scripts\") pod \"root-account-create-update-2tp4z\" (UID: \"db5fe987-662e-4d91-b309-8dea3f6a33a8\") " pod="openstack/root-account-create-update-2tp4z" Jan 31 04:05:56 crc kubenswrapper[4667]: I0131 04:05:56.474591 4667 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsw4s\" (UniqueName: \"kubernetes.io/projected/db5fe987-662e-4d91-b309-8dea3f6a33a8-kube-api-access-zsw4s\") pod \"root-account-create-update-2tp4z\" (UID: \"db5fe987-662e-4d91-b309-8dea3f6a33a8\") " pod="openstack/root-account-create-update-2tp4z" Jan 31 04:05:56 crc kubenswrapper[4667]: I0131 04:05:56.705005 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2tp4z" Jan 31 04:05:57 crc kubenswrapper[4667]: I0131 04:05:57.991427 4667 generic.go:334] "Generic (PLEG): container finished" podID="65cc9566-177a-41b5-b00c-83290fa14641" containerID="53ce80ab8f5a80d429e5480165916f4f2f74db201f1b227be4e0edfb0b1d1fcf" exitCode=0 Jan 31 04:05:57 crc kubenswrapper[4667]: I0131 04:05:57.991459 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fpj9r" event={"ID":"65cc9566-177a-41b5-b00c-83290fa14641","Type":"ContainerDied","Data":"53ce80ab8f5a80d429e5480165916f4f2f74db201f1b227be4e0edfb0b1d1fcf"} Jan 31 04:05:59 crc kubenswrapper[4667]: I0131 04:05:59.981542 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cn9wc" podUID="39c3d98f-a6b1-4558-b565-c9f8c3afa543" containerName="ovn-controller" probeResult="failure" output=< Jan 31 04:05:59 crc kubenswrapper[4667]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 31 04:05:59 crc kubenswrapper[4667]: > Jan 31 04:06:00 crc kubenswrapper[4667]: I0131 04:06:00.022672 4667 generic.go:334] "Generic (PLEG): container finished" podID="9265013e-d7ee-49cf-a5d8-c2f80066f459" containerID="4acd211c95f8f9b2d57a089d9d7532f112c96d9e87ccf2175d7359401f40ac7e" exitCode=0 Jan 31 04:06:00 crc kubenswrapper[4667]: I0131 04:06:00.022747 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9265013e-d7ee-49cf-a5d8-c2f80066f459","Type":"ContainerDied","Data":"4acd211c95f8f9b2d57a089d9d7532f112c96d9e87ccf2175d7359401f40ac7e"} Jan 31 04:06:01 crc kubenswrapper[4667]: I0131 04:06:01.034154 4667 generic.go:334] "Generic (PLEG): container finished" podID="bf3f1a21-51b1-4282-99e5-ab52084984c0" containerID="4efc4bdb3236480020f2071f80c15b872eb6c2e4c82ea5836e3deb47c3e785a5" exitCode=0 Jan 31 04:06:01 crc kubenswrapper[4667]: I0131 04:06:01.034272 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf3f1a21-51b1-4282-99e5-ab52084984c0","Type":"ContainerDied","Data":"4efc4bdb3236480020f2071f80c15b872eb6c2e4c82ea5836e3deb47c3e785a5"} Jan 31 04:06:04 crc kubenswrapper[4667]: I0131 04:06:04.961397 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cn9wc" podUID="39c3d98f-a6b1-4558-b565-c9f8c3afa543" containerName="ovn-controller" probeResult="failure" output=< Jan 31 04:06:04 crc kubenswrapper[4667]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 31 04:06:04 crc kubenswrapper[4667]: > Jan 31 04:06:07 crc kubenswrapper[4667]: E0131 04:06:07.106125 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 31 04:06:07 crc kubenswrapper[4667]: E0131 04:06:07.106394 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhcl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-9mmhm_openstack(959a81ea-7cf7-4fc4-b84d-14699d4e6bb4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:06:07 crc kubenswrapper[4667]: E0131 04:06:07.107503 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-9mmhm" podUID="959a81ea-7cf7-4fc4-b84d-14699d4e6bb4" Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.147166 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fpj9r" event={"ID":"65cc9566-177a-41b5-b00c-83290fa14641","Type":"ContainerDied","Data":"e111a6761b9165ce2e4183b0cb39c64ed3e7beb4a9fda04760b2e57bd76f721d"} Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.147217 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e111a6761b9165ce2e4183b0cb39c64ed3e7beb4a9fda04760b2e57bd76f721d" Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.208264 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.306347 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/65cc9566-177a-41b5-b00c-83290fa14641-etc-swift\") pod \"65cc9566-177a-41b5-b00c-83290fa14641\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.306465 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65cc9566-177a-41b5-b00c-83290fa14641-scripts\") pod \"65cc9566-177a-41b5-b00c-83290fa14641\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.306520 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65cc9566-177a-41b5-b00c-83290fa14641-combined-ca-bundle\") pod \"65cc9566-177a-41b5-b00c-83290fa14641\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.306589 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/65cc9566-177a-41b5-b00c-83290fa14641-ring-data-devices\") pod \"65cc9566-177a-41b5-b00c-83290fa14641\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.306628 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/65cc9566-177a-41b5-b00c-83290fa14641-dispersionconf\") pod \"65cc9566-177a-41b5-b00c-83290fa14641\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.306650 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlvzl\" (UniqueName: \"kubernetes.io/projected/65cc9566-177a-41b5-b00c-83290fa14641-kube-api-access-hlvzl\") pod \"65cc9566-177a-41b5-b00c-83290fa14641\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.306687 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/65cc9566-177a-41b5-b00c-83290fa14641-swiftconf\") pod \"65cc9566-177a-41b5-b00c-83290fa14641\" (UID: \"65cc9566-177a-41b5-b00c-83290fa14641\") " Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.307455 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65cc9566-177a-41b5-b00c-83290fa14641-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "65cc9566-177a-41b5-b00c-83290fa14641" (UID: "65cc9566-177a-41b5-b00c-83290fa14641"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.308392 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65cc9566-177a-41b5-b00c-83290fa14641-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "65cc9566-177a-41b5-b00c-83290fa14641" (UID: "65cc9566-177a-41b5-b00c-83290fa14641"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.321828 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65cc9566-177a-41b5-b00c-83290fa14641-kube-api-access-hlvzl" (OuterVolumeSpecName: "kube-api-access-hlvzl") pod "65cc9566-177a-41b5-b00c-83290fa14641" (UID: "65cc9566-177a-41b5-b00c-83290fa14641"). InnerVolumeSpecName "kube-api-access-hlvzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.332646 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65cc9566-177a-41b5-b00c-83290fa14641-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "65cc9566-177a-41b5-b00c-83290fa14641" (UID: "65cc9566-177a-41b5-b00c-83290fa14641"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.384349 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65cc9566-177a-41b5-b00c-83290fa14641-scripts" (OuterVolumeSpecName: "scripts") pod "65cc9566-177a-41b5-b00c-83290fa14641" (UID: "65cc9566-177a-41b5-b00c-83290fa14641"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.410602 4667 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/65cc9566-177a-41b5-b00c-83290fa14641-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.434612 4667 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/65cc9566-177a-41b5-b00c-83290fa14641-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.434642 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlvzl\" (UniqueName: \"kubernetes.io/projected/65cc9566-177a-41b5-b00c-83290fa14641-kube-api-access-hlvzl\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.434665 4667 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/65cc9566-177a-41b5-b00c-83290fa14641-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.434676 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65cc9566-177a-41b5-b00c-83290fa14641-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.412995 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65cc9566-177a-41b5-b00c-83290fa14641-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65cc9566-177a-41b5-b00c-83290fa14641" (UID: "65cc9566-177a-41b5-b00c-83290fa14641"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.431397 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65cc9566-177a-41b5-b00c-83290fa14641-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "65cc9566-177a-41b5-b00c-83290fa14641" (UID: "65cc9566-177a-41b5-b00c-83290fa14641"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.439461 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.456459 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2tp4z"] Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.536777 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65cc9566-177a-41b5-b00c-83290fa14641-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.536858 4667 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/65cc9566-177a-41b5-b00c-83290fa14641-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:07 crc kubenswrapper[4667]: I0131 04:06:07.663591 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cn9wc-config-5rs25"] Jan 31 04:06:07 crc kubenswrapper[4667]: W0131 04:06:07.667008 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda33db910_c042_4186_a9a2_bb41aa3ec414.slice/crio-5a4938e679b154c5800f3a144af5a9a640a94c3f427873fd0a2c2532be7f234d WatchSource:0}: Error finding container 5a4938e679b154c5800f3a144af5a9a640a94c3f427873fd0a2c2532be7f234d: Status 404 returned error can't find the container with id 5a4938e679b154c5800f3a144af5a9a640a94c3f427873fd0a2c2532be7f234d Jan 31 04:06:08 crc kubenswrapper[4667]: I0131 04:06:08.157701 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cn9wc-config-5rs25" event={"ID":"a33db910-c042-4186-a9a2-bb41aa3ec414","Type":"ContainerStarted","Data":"3ff82ebcfa6f4e8664d3d0d067b26b896ceb57ef35b92a86460b6a27a3ab8ed1"} Jan 31 04:06:08 crc kubenswrapper[4667]: I0131 04:06:08.158084 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cn9wc-config-5rs25" event={"ID":"a33db910-c042-4186-a9a2-bb41aa3ec414","Type":"ContainerStarted","Data":"5a4938e679b154c5800f3a144af5a9a640a94c3f427873fd0a2c2532be7f234d"} Jan 31 04:06:08 crc kubenswrapper[4667]: I0131 04:06:08.161873 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9265013e-d7ee-49cf-a5d8-c2f80066f459","Type":"ContainerStarted","Data":"12ceeaf74fd60367ec42bafef6caa1f099c65b5f4b4c7cd6428da69cd8b65718"} Jan 31 04:06:08 crc kubenswrapper[4667]: I0131 04:06:08.162225 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:06:08 crc kubenswrapper[4667]: I0131 04:06:08.163769 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf3f1a21-51b1-4282-99e5-ab52084984c0","Type":"ContainerStarted","Data":"901395cdc9ecff56b45d8817501b39d6e33ec130717eea1fff83080f85b4220b"} Jan 31 04:06:08 crc kubenswrapper[4667]: I0131 04:06:08.163955 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 31 04:06:08 crc kubenswrapper[4667]: I0131 04:06:08.165086 4667 generic.go:334] "Generic (PLEG): container finished" podID="db5fe987-662e-4d91-b309-8dea3f6a33a8" containerID="78e03a127cf37019a547972f09a4e97a6c89bf3b1a482407aa602e7a2e416f7b" exitCode=0 Jan 31 04:06:08 crc kubenswrapper[4667]: I0131 04:06:08.165731 4667 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2tp4z" event={"ID":"db5fe987-662e-4d91-b309-8dea3f6a33a8","Type":"ContainerDied","Data":"78e03a127cf37019a547972f09a4e97a6c89bf3b1a482407aa602e7a2e416f7b"} Jan 31 04:06:08 crc kubenswrapper[4667]: I0131 04:06:08.165754 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2tp4z" event={"ID":"db5fe987-662e-4d91-b309-8dea3f6a33a8","Type":"ContainerStarted","Data":"943ee0361c75eefb095f4ff6f3dbc46b1689c8fbb64a3d732bc4d67b6a3ebcd6"} Jan 31 04:06:08 crc kubenswrapper[4667]: I0131 04:06:08.165800 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-fpj9r" Jan 31 04:06:08 crc kubenswrapper[4667]: E0131 04:06:08.168105 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-9mmhm" podUID="959a81ea-7cf7-4fc4-b84d-14699d4e6bb4" Jan 31 04:06:08 crc kubenswrapper[4667]: I0131 04:06:08.188119 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-cn9wc-config-5rs25" podStartSLOduration=13.18809424 podStartE2EDuration="13.18809424s" podCreationTimestamp="2026-01-31 04:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:06:08.185207234 +0000 UTC m=+1091.701542533" watchObservedRunningTime="2026-01-31 04:06:08.18809424 +0000 UTC m=+1091.704429539" Jan 31 04:06:08 crc kubenswrapper[4667]: I0131 04:06:08.217147 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=44.761981452 podStartE2EDuration="1m28.217121738s" podCreationTimestamp="2026-01-31 04:04:40 +0000 UTC" firstStartedPulling="2026-01-31 04:04:42.429055257 +0000 UTC m=+1005.945390546" lastFinishedPulling="2026-01-31 04:05:25.884195533 +0000 UTC m=+1049.400530832" observedRunningTime="2026-01-31 04:06:08.208793478 +0000 UTC m=+1091.725128777" watchObservedRunningTime="2026-01-31 04:06:08.217121738 +0000 UTC m=+1091.733457037" Jan 31 04:06:08 crc kubenswrapper[4667]: I0131 04:06:08.252890 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371947.60196 podStartE2EDuration="1m29.252816023s" podCreationTimestamp="2026-01-31 04:04:39 +0000 UTC" firstStartedPulling="2026-01-31 04:04:42.688090503 +0000 UTC m=+1006.204425802" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:06:08.244873823 +0000 UTC m=+1091.761209122" watchObservedRunningTime="2026-01-31 04:06:08.252816023 +0000 UTC m=+1091.769151322" Jan 31 04:06:09 crc kubenswrapper[4667]: I0131 04:06:09.180539 4667 generic.go:334] "Generic (PLEG): container finished" podID="a33db910-c042-4186-a9a2-bb41aa3ec414" containerID="3ff82ebcfa6f4e8664d3d0d067b26b896ceb57ef35b92a86460b6a27a3ab8ed1" exitCode=0 Jan 31 04:06:09 crc kubenswrapper[4667]: I0131 04:06:09.180945 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cn9wc-config-5rs25" event={"ID":"a33db910-c042-4186-a9a2-bb41aa3ec414","Type":"ContainerDied","Data":"3ff82ebcfa6f4e8664d3d0d067b26b896ceb57ef35b92a86460b6a27a3ab8ed1"} Jan 31 04:06:09 crc kubenswrapper[4667]: 
I0131 04:06:09.275344 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:06:09 crc kubenswrapper[4667]: I0131 04:06:09.306552 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/49dfb349-068e-4574-9e26-3d413295d983-etc-swift\") pod \"swift-storage-0\" (UID: \"49dfb349-068e-4574-9e26-3d413295d983\") " pod="openstack/swift-storage-0" Jan 31 04:06:09 crc kubenswrapper[4667]: I0131 04:06:09.505356 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 31 04:06:09 crc kubenswrapper[4667]: I0131 04:06:09.628093 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2tp4z" Jan 31 04:06:09 crc kubenswrapper[4667]: I0131 04:06:09.683209 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db5fe987-662e-4d91-b309-8dea3f6a33a8-operator-scripts\") pod \"db5fe987-662e-4d91-b309-8dea3f6a33a8\" (UID: \"db5fe987-662e-4d91-b309-8dea3f6a33a8\") " Jan 31 04:06:09 crc kubenswrapper[4667]: I0131 04:06:09.683734 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsw4s\" (UniqueName: \"kubernetes.io/projected/db5fe987-662e-4d91-b309-8dea3f6a33a8-kube-api-access-zsw4s\") pod \"db5fe987-662e-4d91-b309-8dea3f6a33a8\" (UID: \"db5fe987-662e-4d91-b309-8dea3f6a33a8\") " Jan 31 04:06:09 crc kubenswrapper[4667]: I0131 04:06:09.684312 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db5fe987-662e-4d91-b309-8dea3f6a33a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db5fe987-662e-4d91-b309-8dea3f6a33a8" (UID: "db5fe987-662e-4d91-b309-8dea3f6a33a8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:09 crc kubenswrapper[4667]: I0131 04:06:09.689833 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db5fe987-662e-4d91-b309-8dea3f6a33a8-kube-api-access-zsw4s" (OuterVolumeSpecName: "kube-api-access-zsw4s") pod "db5fe987-662e-4d91-b309-8dea3f6a33a8" (UID: "db5fe987-662e-4d91-b309-8dea3f6a33a8"). InnerVolumeSpecName "kube-api-access-zsw4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:09 crc kubenswrapper[4667]: I0131 04:06:09.785448 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db5fe987-662e-4d91-b309-8dea3f6a33a8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:09 crc kubenswrapper[4667]: I0131 04:06:09.785480 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsw4s\" (UniqueName: \"kubernetes.io/projected/db5fe987-662e-4d91-b309-8dea3f6a33a8-kube-api-access-zsw4s\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:09 crc kubenswrapper[4667]: I0131 04:06:09.969391 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-cn9wc" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.153152 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.192207 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2tp4z" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.196075 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2tp4z" event={"ID":"db5fe987-662e-4d91-b309-8dea3f6a33a8","Type":"ContainerDied","Data":"943ee0361c75eefb095f4ff6f3dbc46b1689c8fbb64a3d732bc4d67b6a3ebcd6"} Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.196137 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="943ee0361c75eefb095f4ff6f3dbc46b1689c8fbb64a3d732bc4d67b6a3ebcd6" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.206598 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49dfb349-068e-4574-9e26-3d413295d983","Type":"ContainerStarted","Data":"0c8fa81de70821d20479274e7ed26be958e3bd8ac8416af0fa2d01ff34873fa5"} Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.636194 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.780886 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-cn9wc-config-5rs25"] Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.799996 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-cn9wc-config-5rs25"] Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.818014 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a33db910-c042-4186-a9a2-bb41aa3ec414-additional-scripts\") pod \"a33db910-c042-4186-a9a2-bb41aa3ec414\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.819093 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwb6b\" (UniqueName: \"kubernetes.io/projected/a33db910-c042-4186-a9a2-bb41aa3ec414-kube-api-access-nwb6b\") pod \"a33db910-c042-4186-a9a2-bb41aa3ec414\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.819405 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a33db910-c042-4186-a9a2-bb41aa3ec414-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a33db910-c042-4186-a9a2-bb41aa3ec414" (UID: "a33db910-c042-4186-a9a2-bb41aa3ec414"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.819887 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a33db910-c042-4186-a9a2-bb41aa3ec414-var-run-ovn\") pod \"a33db910-c042-4186-a9a2-bb41aa3ec414\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.820051 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a33db910-c042-4186-a9a2-bb41aa3ec414-var-log-ovn\") pod \"a33db910-c042-4186-a9a2-bb41aa3ec414\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.820094 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a33db910-c042-4186-a9a2-bb41aa3ec414-scripts\") pod \"a33db910-c042-4186-a9a2-bb41aa3ec414\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.820135 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a33db910-c042-4186-a9a2-bb41aa3ec414-var-run\") pod \"a33db910-c042-4186-a9a2-bb41aa3ec414\" (UID: \"a33db910-c042-4186-a9a2-bb41aa3ec414\") " Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.820192 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a33db910-c042-4186-a9a2-bb41aa3ec414-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a33db910-c042-4186-a9a2-bb41aa3ec414" (UID: "a33db910-c042-4186-a9a2-bb41aa3ec414"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.820356 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a33db910-c042-4186-a9a2-bb41aa3ec414-var-run" (OuterVolumeSpecName: "var-run") pod "a33db910-c042-4186-a9a2-bb41aa3ec414" (UID: "a33db910-c042-4186-a9a2-bb41aa3ec414"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.820564 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a33db910-c042-4186-a9a2-bb41aa3ec414-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a33db910-c042-4186-a9a2-bb41aa3ec414" (UID: "a33db910-c042-4186-a9a2-bb41aa3ec414"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.820968 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a33db910-c042-4186-a9a2-bb41aa3ec414-scripts" (OuterVolumeSpecName: "scripts") pod "a33db910-c042-4186-a9a2-bb41aa3ec414" (UID: "a33db910-c042-4186-a9a2-bb41aa3ec414"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.821339 4667 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a33db910-c042-4186-a9a2-bb41aa3ec414-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.821363 4667 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a33db910-c042-4186-a9a2-bb41aa3ec414-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.821373 4667 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a33db910-c042-4186-a9a2-bb41aa3ec414-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.821383 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a33db910-c042-4186-a9a2-bb41aa3ec414-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.821394 4667 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a33db910-c042-4186-a9a2-bb41aa3ec414-var-run\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.825676 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a33db910-c042-4186-a9a2-bb41aa3ec414-kube-api-access-nwb6b" (OuterVolumeSpecName: "kube-api-access-nwb6b") pod "a33db910-c042-4186-a9a2-bb41aa3ec414" (UID: "a33db910-c042-4186-a9a2-bb41aa3ec414"). InnerVolumeSpecName "kube-api-access-nwb6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.905519 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cn9wc-config-vlj6z"] Jan 31 04:06:10 crc kubenswrapper[4667]: E0131 04:06:10.906368 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db5fe987-662e-4d91-b309-8dea3f6a33a8" containerName="mariadb-account-create-update" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.906479 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="db5fe987-662e-4d91-b309-8dea3f6a33a8" containerName="mariadb-account-create-update" Jan 31 04:06:10 crc kubenswrapper[4667]: E0131 04:06:10.910280 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a33db910-c042-4186-a9a2-bb41aa3ec414" containerName="ovn-config" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.910450 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="a33db910-c042-4186-a9a2-bb41aa3ec414" containerName="ovn-config" Jan 31 04:06:10 crc kubenswrapper[4667]: E0131 04:06:10.910531 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65cc9566-177a-41b5-b00c-83290fa14641" containerName="swift-ring-rebalance" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.910591 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cc9566-177a-41b5-b00c-83290fa14641" containerName="swift-ring-rebalance" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.910989 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="a33db910-c042-4186-a9a2-bb41aa3ec414" containerName="ovn-config" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.911075 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="db5fe987-662e-4d91-b309-8dea3f6a33a8" containerName="mariadb-account-create-update" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.911165 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="65cc9566-177a-41b5-b00c-83290fa14641" containerName="swift-ring-rebalance" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.912063 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.923976 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b1f2d92c-1644-4fb2-90e0-96efa69d5928-var-run-ovn\") pod \"ovn-controller-cn9wc-config-vlj6z\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.924047 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b1f2d92c-1644-4fb2-90e0-96efa69d5928-additional-scripts\") pod \"ovn-controller-cn9wc-config-vlj6z\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.924110 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b1f2d92c-1644-4fb2-90e0-96efa69d5928-var-run\") pod \"ovn-controller-cn9wc-config-vlj6z\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.924131 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b1f2d92c-1644-4fb2-90e0-96efa69d5928-var-log-ovn\") pod \"ovn-controller-cn9wc-config-vlj6z\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.924179 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26z6k\" (UniqueName: \"kubernetes.io/projected/b1f2d92c-1644-4fb2-90e0-96efa69d5928-kube-api-access-26z6k\") pod \"ovn-controller-cn9wc-config-vlj6z\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.924211 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1f2d92c-1644-4fb2-90e0-96efa69d5928-scripts\") pod \"ovn-controller-cn9wc-config-vlj6z\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.924249 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwb6b\" (UniqueName: \"kubernetes.io/projected/a33db910-c042-4186-a9a2-bb41aa3ec414-kube-api-access-nwb6b\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:10 crc kubenswrapper[4667]: I0131 04:06:10.932478 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cn9wc-config-vlj6z"] Jan 31 04:06:11 crc kubenswrapper[4667]: I0131 04:06:11.024729 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b1f2d92c-1644-4fb2-90e0-96efa69d5928-var-run\") pod \"ovn-controller-cn9wc-config-vlj6z\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:11 crc kubenswrapper[4667]: I0131 04:06:11.024771 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b1f2d92c-1644-4fb2-90e0-96efa69d5928-var-log-ovn\") pod \"ovn-controller-cn9wc-config-vlj6z\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:11 crc kubenswrapper[4667]: I0131 04:06:11.024819 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26z6k\" (UniqueName: \"kubernetes.io/projected/b1f2d92c-1644-4fb2-90e0-96efa69d5928-kube-api-access-26z6k\") pod \"ovn-controller-cn9wc-config-vlj6z\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:11 crc kubenswrapper[4667]: I0131 04:06:11.024867 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1f2d92c-1644-4fb2-90e0-96efa69d5928-scripts\") pod \"ovn-controller-cn9wc-config-vlj6z\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:11 crc kubenswrapper[4667]: I0131 04:06:11.024887 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b1f2d92c-1644-4fb2-90e0-96efa69d5928-var-run-ovn\") pod \"ovn-controller-cn9wc-config-vlj6z\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:11 crc kubenswrapper[4667]: I0131 04:06:11.024917 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b1f2d92c-1644-4fb2-90e0-96efa69d5928-additional-scripts\") pod \"ovn-controller-cn9wc-config-vlj6z\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:11 crc kubenswrapper[4667]: I0131 04:06:11.025654 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b1f2d92c-1644-4fb2-90e0-96efa69d5928-additional-scripts\") pod \"ovn-controller-cn9wc-config-vlj6z\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:11 crc kubenswrapper[4667]: I0131 04:06:11.026006 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b1f2d92c-1644-4fb2-90e0-96efa69d5928-var-run-ovn\") pod \"ovn-controller-cn9wc-config-vlj6z\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:11 crc kubenswrapper[4667]: I0131 04:06:11.026052 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b1f2d92c-1644-4fb2-90e0-96efa69d5928-var-log-ovn\") pod \"ovn-controller-cn9wc-config-vlj6z\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:11 crc kubenswrapper[4667]: I0131 04:06:11.026197 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b1f2d92c-1644-4fb2-90e0-96efa69d5928-var-run\") pod \"ovn-controller-cn9wc-config-vlj6z\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:11 crc kubenswrapper[4667]: I0131 04:06:11.028421 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b1f2d92c-1644-4fb2-90e0-96efa69d5928-scripts\") pod \"ovn-controller-cn9wc-config-vlj6z\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:11 crc kubenswrapper[4667]: I0131 04:06:11.042897 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26z6k\" (UniqueName: \"kubernetes.io/projected/b1f2d92c-1644-4fb2-90e0-96efa69d5928-kube-api-access-26z6k\") pod \"ovn-controller-cn9wc-config-vlj6z\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:11 crc kubenswrapper[4667]: I0131 04:06:11.217445 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a4938e679b154c5800f3a144af5a9a640a94c3f427873fd0a2c2532be7f234d" Jan 31 04:06:11 crc kubenswrapper[4667]: I0131 04:06:11.217510 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cn9wc-config-5rs25" Jan 31 04:06:11 crc kubenswrapper[4667]: I0131 04:06:11.250549 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:11 crc kubenswrapper[4667]: I0131 04:06:11.292258 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a33db910-c042-4186-a9a2-bb41aa3ec414" path="/var/lib/kubelet/pods/a33db910-c042-4186-a9a2-bb41aa3ec414/volumes" Jan 31 04:06:12 crc kubenswrapper[4667]: I0131 04:06:12.144887 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cn9wc-config-vlj6z"] Jan 31 04:06:12 crc kubenswrapper[4667]: I0131 04:06:12.254886 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cn9wc-config-vlj6z" event={"ID":"b1f2d92c-1644-4fb2-90e0-96efa69d5928","Type":"ContainerStarted","Data":"e40bac33cd863316a15b67fb467c92cbcf2599f3ab3a391715f1c5bdab7f3ac8"} Jan 31 04:06:12 crc kubenswrapper[4667]: I0131 04:06:12.264062 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49dfb349-068e-4574-9e26-3d413295d983","Type":"ContainerStarted","Data":"99cfc1fd2bbc103e88168324a6863e45c9ab0ce8801b657235a07f017c705977"} Jan 31 04:06:12 crc kubenswrapper[4667]: I0131 04:06:12.264125 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49dfb349-068e-4574-9e26-3d413295d983","Type":"ContainerStarted","Data":"e759362a4eb7fc65fa2480fde96d91de7355c3cb5a2ef4f043f7032cd5b28da6"} Jan 31 04:06:12 crc kubenswrapper[4667]: I0131 04:06:12.577594 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2tp4z"] Jan 31 04:06:12 crc kubenswrapper[4667]: I0131 04:06:12.586063 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-2tp4z"] Jan 31 04:06:13 crc kubenswrapper[4667]: I0131 04:06:13.274689 4667 generic.go:334] "Generic (PLEG): container finished" podID="b1f2d92c-1644-4fb2-90e0-96efa69d5928" containerID="f5619ebae99cdb634e963f926a5321d80524970329dd65fff8723f0849bf7d3e" exitCode=0 Jan 31 04:06:13 crc kubenswrapper[4667]: I0131 04:06:13.274881 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cn9wc-config-vlj6z" event={"ID":"b1f2d92c-1644-4fb2-90e0-96efa69d5928","Type":"ContainerDied","Data":"f5619ebae99cdb634e963f926a5321d80524970329dd65fff8723f0849bf7d3e"} Jan 31 04:06:13 crc kubenswrapper[4667]: I0131 04:06:13.279192 4667 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49dfb349-068e-4574-9e26-3d413295d983","Type":"ContainerStarted","Data":"b2e2ed164bf21c7623af4e71dd5bbcbc5dcb1d923859a59c69082a9697068580"} Jan 31 04:06:13 crc kubenswrapper[4667]: I0131 04:06:13.279244 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49dfb349-068e-4574-9e26-3d413295d983","Type":"ContainerStarted","Data":"0328589f0274fedaeabce1bff55fc1461a17b91c7aca46b3702e7eae922e0ee9"} Jan 31 04:06:13 crc kubenswrapper[4667]: I0131 04:06:13.299187 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db5fe987-662e-4d91-b309-8dea3f6a33a8" path="/var/lib/kubelet/pods/db5fe987-662e-4d91-b309-8dea3f6a33a8/volumes" Jan 31 04:06:14 crc kubenswrapper[4667]: I0131 04:06:14.292351 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49dfb349-068e-4574-9e26-3d413295d983","Type":"ContainerStarted","Data":"cd8c819127c463359d5483c88fa2ae5fdae81c7be2b404dd7dd59f67345911c9"} Jan 31 04:06:14 crc kubenswrapper[4667]: I0131 04:06:14.631185 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:14 crc kubenswrapper[4667]: I0131 04:06:14.716068 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b1f2d92c-1644-4fb2-90e0-96efa69d5928-var-log-ovn\") pod \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " Jan 31 04:06:14 crc kubenswrapper[4667]: I0131 04:06:14.716167 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1f2d92c-1644-4fb2-90e0-96efa69d5928-scripts\") pod \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " Jan 31 04:06:14 crc kubenswrapper[4667]: I0131 04:06:14.716191 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1f2d92c-1644-4fb2-90e0-96efa69d5928-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b1f2d92c-1644-4fb2-90e0-96efa69d5928" (UID: "b1f2d92c-1644-4fb2-90e0-96efa69d5928"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:06:14 crc kubenswrapper[4667]: I0131 04:06:14.716271 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b1f2d92c-1644-4fb2-90e0-96efa69d5928-additional-scripts\") pod \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " Jan 31 04:06:14 crc kubenswrapper[4667]: I0131 04:06:14.716291 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b1f2d92c-1644-4fb2-90e0-96efa69d5928-var-run\") pod \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " Jan 31 04:06:14 crc kubenswrapper[4667]: I0131 04:06:14.716336 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b1f2d92c-1644-4fb2-90e0-96efa69d5928-var-run-ovn\") pod \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " Jan 31 04:06:14 crc kubenswrapper[4667]: I0131 04:06:14.716356 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26z6k\" (UniqueName: \"kubernetes.io/projected/b1f2d92c-1644-4fb2-90e0-96efa69d5928-kube-api-access-26z6k\") pod \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\" (UID: \"b1f2d92c-1644-4fb2-90e0-96efa69d5928\") " Jan 31 04:06:14 crc kubenswrapper[4667]: I0131 04:06:14.716914 4667 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b1f2d92c-1644-4fb2-90e0-96efa69d5928-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:14 crc kubenswrapper[4667]: I0131 04:06:14.717558 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1f2d92c-1644-4fb2-90e0-96efa69d5928-var-run" (OuterVolumeSpecName: "var-run") pod "b1f2d92c-1644-4fb2-90e0-96efa69d5928" (UID: "b1f2d92c-1644-4fb2-90e0-96efa69d5928"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:06:14 crc kubenswrapper[4667]: I0131 04:06:14.717566 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1f2d92c-1644-4fb2-90e0-96efa69d5928-scripts" (OuterVolumeSpecName: "scripts") pod "b1f2d92c-1644-4fb2-90e0-96efa69d5928" (UID: "b1f2d92c-1644-4fb2-90e0-96efa69d5928"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:14 crc kubenswrapper[4667]: I0131 04:06:14.717654 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1f2d92c-1644-4fb2-90e0-96efa69d5928-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b1f2d92c-1644-4fb2-90e0-96efa69d5928" (UID: "b1f2d92c-1644-4fb2-90e0-96efa69d5928"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:06:14 crc kubenswrapper[4667]: I0131 04:06:14.720390 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1f2d92c-1644-4fb2-90e0-96efa69d5928-kube-api-access-26z6k" (OuterVolumeSpecName: "kube-api-access-26z6k") pod "b1f2d92c-1644-4fb2-90e0-96efa69d5928" (UID: "b1f2d92c-1644-4fb2-90e0-96efa69d5928"). InnerVolumeSpecName "kube-api-access-26z6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:14 crc kubenswrapper[4667]: I0131 04:06:14.721047 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1f2d92c-1644-4fb2-90e0-96efa69d5928-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b1f2d92c-1644-4fb2-90e0-96efa69d5928" (UID: "b1f2d92c-1644-4fb2-90e0-96efa69d5928"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:14 crc kubenswrapper[4667]: I0131 04:06:14.819332 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1f2d92c-1644-4fb2-90e0-96efa69d5928-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:14 crc kubenswrapper[4667]: I0131 04:06:14.819381 4667 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b1f2d92c-1644-4fb2-90e0-96efa69d5928-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:14 crc kubenswrapper[4667]: I0131 04:06:14.819397 4667 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b1f2d92c-1644-4fb2-90e0-96efa69d5928-var-run\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:14 crc kubenswrapper[4667]: I0131 04:06:14.819407 4667 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b1f2d92c-1644-4fb2-90e0-96efa69d5928-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:14 crc kubenswrapper[4667]: I0131 04:06:14.819420 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26z6k\" (UniqueName: \"kubernetes.io/projected/b1f2d92c-1644-4fb2-90e0-96efa69d5928-kube-api-access-26z6k\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:15 crc kubenswrapper[4667]: I0131 04:06:15.303209 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cn9wc-config-vlj6z" Jan 31 04:06:15 crc kubenswrapper[4667]: I0131 04:06:15.303204 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cn9wc-config-vlj6z" event={"ID":"b1f2d92c-1644-4fb2-90e0-96efa69d5928","Type":"ContainerDied","Data":"e40bac33cd863316a15b67fb467c92cbcf2599f3ab3a391715f1c5bdab7f3ac8"} Jan 31 04:06:15 crc kubenswrapper[4667]: I0131 04:06:15.303880 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e40bac33cd863316a15b67fb467c92cbcf2599f3ab3a391715f1c5bdab7f3ac8" Jan 31 04:06:15 crc kubenswrapper[4667]: I0131 04:06:15.311020 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49dfb349-068e-4574-9e26-3d413295d983","Type":"ContainerStarted","Data":"0c2945cf5ea37668dae47ea9fe1216b87c9eb7fe32b00a80052a5fc6ae26ae1d"} Jan 31 04:06:15 crc kubenswrapper[4667]: I0131 04:06:15.311149 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49dfb349-068e-4574-9e26-3d413295d983","Type":"ContainerStarted","Data":"de0c211ceddb07e1e60d614d2e19e1a70e5b31443b98a2f036db3d296f16a5cb"} Jan 31 04:06:15 crc kubenswrapper[4667]: I0131 04:06:15.311173 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49dfb349-068e-4574-9e26-3d413295d983","Type":"ContainerStarted","Data":"8ce41b8b791f1e4ea8dfe0448a7da3c5326a5c93a4941e63aaef357b15ff919c"} Jan 31 04:06:15 crc kubenswrapper[4667]: I0131 04:06:15.726002 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-cn9wc-config-vlj6z"] Jan 31 04:06:15 crc kubenswrapper[4667]: I0131 04:06:15.736276 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-cn9wc-config-vlj6z"] Jan 31 04:06:17 crc kubenswrapper[4667]: I0131 04:06:17.308463 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1f2d92c-1644-4fb2-90e0-96efa69d5928" path="/var/lib/kubelet/pods/b1f2d92c-1644-4fb2-90e0-96efa69d5928/volumes" Jan 31 04:06:17 crc kubenswrapper[4667]: I0131 04:06:17.411484 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49dfb349-068e-4574-9e26-3d413295d983","Type":"ContainerStarted","Data":"4a9e418bece156cddfd435b66d36a931e3fe3dce2e24be45ef8aec801d4c1034"} Jan 31 04:06:17 crc kubenswrapper[4667]: I0131 04:06:17.411538 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49dfb349-068e-4574-9e26-3d413295d983","Type":"ContainerStarted","Data":"52ccd559ba079a8bf968e7f16a5a3382aa6dc4ecd0bef43c0730acdb424f8a5f"} Jan 31 04:06:17 crc kubenswrapper[4667]: I0131 04:06:17.411548 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49dfb349-068e-4574-9e26-3d413295d983","Type":"ContainerStarted","Data":"c7de4c347f85ce9001f2ed337a7bf283d34ab943f26fe5bf10852b6c36aaf474"} Jan 31 04:06:17 crc kubenswrapper[4667]: I0131 04:06:17.411557 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49dfb349-068e-4574-9e26-3d413295d983","Type":"ContainerStarted","Data":"68ebf6cdd125bd33d147eb6759c25e48347eab68145b437c3eb64aa975a44946"} Jan 31 04:06:17 crc kubenswrapper[4667]: I0131 04:06:17.411565 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"49dfb349-068e-4574-9e26-3d413295d983","Type":"ContainerStarted","Data":"b53fa81770d93b89a67d4d028e5b1dfd999445322bac3ccfd7643cc41d78bbaf"} Jan 31 04:06:17 crc kubenswrapper[4667]: I0131 04:06:17.411573 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49dfb349-068e-4574-9e26-3d413295d983","Type":"ContainerStarted","Data":"934a1c3f5ec7b9cd701be86ff8f908e91a24eba4c97053b8f3c5c7da0f1cf918"} Jan 31 04:06:17 crc kubenswrapper[4667]: I0131 04:06:17.600178 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-dvs8p"] Jan 31 04:06:17 crc kubenswrapper[4667]: E0131 04:06:17.600530 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f2d92c-1644-4fb2-90e0-96efa69d5928" containerName="ovn-config" Jan 31 04:06:17 crc kubenswrapper[4667]: I0131 04:06:17.600544 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f2d92c-1644-4fb2-90e0-96efa69d5928" containerName="ovn-config" Jan 31 04:06:17 crc kubenswrapper[4667]: I0131 04:06:17.600713 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1f2d92c-1644-4fb2-90e0-96efa69d5928" containerName="ovn-config" Jan 31 04:06:17 crc kubenswrapper[4667]: I0131 04:06:17.601264 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dvs8p" Jan 31 04:06:17 crc kubenswrapper[4667]: I0131 04:06:17.612620 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dvs8p"] Jan 31 04:06:17 crc kubenswrapper[4667]: I0131 04:06:17.613750 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 31 04:06:17 crc kubenswrapper[4667]: I0131 04:06:17.671602 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks9ch\" (UniqueName: \"kubernetes.io/projected/2fd7f92b-44c9-4765-99cf-9a42006f9f83-kube-api-access-ks9ch\") pod \"root-account-create-update-dvs8p\" (UID: \"2fd7f92b-44c9-4765-99cf-9a42006f9f83\") " pod="openstack/root-account-create-update-dvs8p" Jan 31 04:06:17 crc kubenswrapper[4667]: I0131 04:06:17.671758 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fd7f92b-44c9-4765-99cf-9a42006f9f83-operator-scripts\") pod \"root-account-create-update-dvs8p\" (UID: \"2fd7f92b-44c9-4765-99cf-9a42006f9f83\") " pod="openstack/root-account-create-update-dvs8p" Jan 31 04:06:17 crc kubenswrapper[4667]: I0131 04:06:17.773124 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fd7f92b-44c9-4765-99cf-9a42006f9f83-operator-scripts\") pod \"root-account-create-update-dvs8p\" (UID: \"2fd7f92b-44c9-4765-99cf-9a42006f9f83\") " pod="openstack/root-account-create-update-dvs8p" Jan 31 04:06:17 crc kubenswrapper[4667]: I0131 04:06:17.773193 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks9ch\" (UniqueName: \"kubernetes.io/projected/2fd7f92b-44c9-4765-99cf-9a42006f9f83-kube-api-access-ks9ch\") pod \"root-account-create-update-dvs8p\" (UID: \"2fd7f92b-44c9-4765-99cf-9a42006f9f83\") " pod="openstack/root-account-create-update-dvs8p" Jan 31 04:06:17 crc kubenswrapper[4667]: I0131 04:06:17.774356 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fd7f92b-44c9-4765-99cf-9a42006f9f83-operator-scripts\") pod \"root-account-create-update-dvs8p\" (UID: \"2fd7f92b-44c9-4765-99cf-9a42006f9f83\") " pod="openstack/root-account-create-update-dvs8p" Jan 31 04:06:17 crc kubenswrapper[4667]: I0131 04:06:17.794511 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks9ch\" (UniqueName: \"kubernetes.io/projected/2fd7f92b-44c9-4765-99cf-9a42006f9f83-kube-api-access-ks9ch\") pod \"root-account-create-update-dvs8p\" (UID: \"2fd7f92b-44c9-4765-99cf-9a42006f9f83\") " pod="openstack/root-account-create-update-dvs8p" Jan 31 04:06:17 crc kubenswrapper[4667]: I0131 04:06:17.928739 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dvs8p" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.178063 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dvs8p"] Jan 31 04:06:18 crc kubenswrapper[4667]: W0131 04:06:18.185013 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fd7f92b_44c9_4765_99cf_9a42006f9f83.slice/crio-a06afa51d34f9329e956665be0f8c78e672ad9cd95dc1b58100c2609c2a7dbf4 WatchSource:0}: Error finding container a06afa51d34f9329e956665be0f8c78e672ad9cd95dc1b58100c2609c2a7dbf4: Status 404 returned error can't find the container with id a06afa51d34f9329e956665be0f8c78e672ad9cd95dc1b58100c2609c2a7dbf4 Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.420888 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dvs8p" event={"ID":"2fd7f92b-44c9-4765-99cf-9a42006f9f83","Type":"ContainerStarted","Data":"1644e21cd24370357d969e91285153a749284d12235479af8d2eabe76e8f328b"} Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.420945 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dvs8p" event={"ID":"2fd7f92b-44c9-4765-99cf-9a42006f9f83","Type":"ContainerStarted","Data":"a06afa51d34f9329e956665be0f8c78e672ad9cd95dc1b58100c2609c2a7dbf4"} Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.426333 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"49dfb349-068e-4574-9e26-3d413295d983","Type":"ContainerStarted","Data":"57221fbc09ea9d1e17b2f03d758bbff87f2337bc0d115a6ef6358264beffe8cf"} Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.440585 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-dvs8p" podStartSLOduration=1.4405600619999999 podStartE2EDuration="1.440560062s" podCreationTimestamp="2026-01-31 04:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:06:18.435165659 +0000 UTC m=+1101.951500948" watchObservedRunningTime="2026-01-31 04:06:18.440560062 +0000 UTC m=+1101.956895361" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.487570 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.529902239 podStartE2EDuration="42.487550895s" podCreationTimestamp="2026-01-31 04:05:36 +0000 UTC" firstStartedPulling="2026-01-31 04:06:10.152399161 +0000 UTC m=+1093.668734460" lastFinishedPulling="2026-01-31 04:06:16.110047817 +0000 UTC m=+1099.626383116" 
observedRunningTime="2026-01-31 04:06:18.483828647 +0000 UTC m=+1102.000163966" watchObservedRunningTime="2026-01-31 04:06:18.487550895 +0000 UTC m=+1102.003886184" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.771029 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9n7xp"] Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.772734 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.776022 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.787049 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-9n7xp\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.787111 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-config\") pod \"dnsmasq-dns-764c5664d7-9n7xp\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.787177 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmgtx\" (UniqueName: \"kubernetes.io/projected/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-kube-api-access-qmgtx\") pod \"dnsmasq-dns-764c5664d7-9n7xp\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.787202 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-9n7xp\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.787225 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-9n7xp\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.787242 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-dns-svc\") pod \"dnsmasq-dns-764c5664d7-9n7xp\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.799218 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9n7xp"] Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.888517 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmgtx\" (UniqueName: \"kubernetes.io/projected/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-kube-api-access-qmgtx\") 
pod \"dnsmasq-dns-764c5664d7-9n7xp\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.888564 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-9n7xp\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.888619 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-9n7xp\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.889593 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-9n7xp\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.889619 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-9n7xp\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.889641 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-dns-svc\") pod \"dnsmasq-dns-764c5664d7-9n7xp\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.889825 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-9n7xp\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.889890 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-dns-svc\") pod \"dnsmasq-dns-764c5664d7-9n7xp\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.889995 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-config\") pod \"dnsmasq-dns-764c5664d7-9n7xp\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.890415 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-9n7xp\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " 
pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.890911 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-config\") pod \"dnsmasq-dns-764c5664d7-9n7xp\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:18 crc kubenswrapper[4667]: I0131 04:06:18.912228 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmgtx\" (UniqueName: \"kubernetes.io/projected/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-kube-api-access-qmgtx\") pod \"dnsmasq-dns-764c5664d7-9n7xp\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:19 crc kubenswrapper[4667]: I0131 04:06:19.086422 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:19 crc kubenswrapper[4667]: I0131 04:06:19.435361 4667 generic.go:334] "Generic (PLEG): container finished" podID="2fd7f92b-44c9-4765-99cf-9a42006f9f83" containerID="1644e21cd24370357d969e91285153a749284d12235479af8d2eabe76e8f328b" exitCode=0 Jan 31 04:06:19 crc kubenswrapper[4667]: I0131 04:06:19.435538 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dvs8p" event={"ID":"2fd7f92b-44c9-4765-99cf-9a42006f9f83","Type":"ContainerDied","Data":"1644e21cd24370357d969e91285153a749284d12235479af8d2eabe76e8f328b"} Jan 31 04:06:19 crc kubenswrapper[4667]: I0131 04:06:19.597491 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9n7xp"] Jan 31 04:06:20 crc kubenswrapper[4667]: I0131 04:06:20.453172 4667 generic.go:334] "Generic (PLEG): container finished" podID="ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1" containerID="8779413dc87b5848812a23d54aac7d5441b5b6978b0b62e0a87cf2ee4da33894" exitCode=0 Jan 31 04:06:20 crc kubenswrapper[4667]: I0131 04:06:20.453362 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" event={"ID":"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1","Type":"ContainerDied","Data":"8779413dc87b5848812a23d54aac7d5441b5b6978b0b62e0a87cf2ee4da33894"} Jan 31 04:06:20 crc kubenswrapper[4667]: I0131 04:06:20.454221 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" event={"ID":"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1","Type":"ContainerStarted","Data":"a78ecca05c334fbd01b9b014d0bff08872e28937787b58779d937f727ac0a660"} Jan 31 04:06:20 crc kubenswrapper[4667]: I0131 04:06:20.760102 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dvs8p" Jan 31 04:06:20 crc kubenswrapper[4667]: I0131 04:06:20.948666 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fd7f92b-44c9-4765-99cf-9a42006f9f83-operator-scripts\") pod \"2fd7f92b-44c9-4765-99cf-9a42006f9f83\" (UID: \"2fd7f92b-44c9-4765-99cf-9a42006f9f83\") " Jan 31 04:06:20 crc kubenswrapper[4667]: I0131 04:06:20.949171 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks9ch\" (UniqueName: \"kubernetes.io/projected/2fd7f92b-44c9-4765-99cf-9a42006f9f83-kube-api-access-ks9ch\") pod \"2fd7f92b-44c9-4765-99cf-9a42006f9f83\" (UID: \"2fd7f92b-44c9-4765-99cf-9a42006f9f83\") " Jan 31 04:06:20 crc kubenswrapper[4667]: I0131 04:06:20.949280 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fd7f92b-44c9-4765-99cf-9a42006f9f83-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2fd7f92b-44c9-4765-99cf-9a42006f9f83" (UID: "2fd7f92b-44c9-4765-99cf-9a42006f9f83"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:20 crc kubenswrapper[4667]: I0131 04:06:20.950028 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fd7f92b-44c9-4765-99cf-9a42006f9f83-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:20 crc kubenswrapper[4667]: I0131 04:06:20.954146 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fd7f92b-44c9-4765-99cf-9a42006f9f83-kube-api-access-ks9ch" (OuterVolumeSpecName: "kube-api-access-ks9ch") pod "2fd7f92b-44c9-4765-99cf-9a42006f9f83" (UID: "2fd7f92b-44c9-4765-99cf-9a42006f9f83"). InnerVolumeSpecName "kube-api-access-ks9ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:21 crc kubenswrapper[4667]: I0131 04:06:21.051309 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks9ch\" (UniqueName: \"kubernetes.io/projected/2fd7f92b-44c9-4765-99cf-9a42006f9f83-kube-api-access-ks9ch\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:21 crc kubenswrapper[4667]: I0131 04:06:21.463086 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dvs8p" Jan 31 04:06:21 crc kubenswrapper[4667]: I0131 04:06:21.463317 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dvs8p" event={"ID":"2fd7f92b-44c9-4765-99cf-9a42006f9f83","Type":"ContainerDied","Data":"a06afa51d34f9329e956665be0f8c78e672ad9cd95dc1b58100c2609c2a7dbf4"} Jan 31 04:06:21 crc kubenswrapper[4667]: I0131 04:06:21.463359 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a06afa51d34f9329e956665be0f8c78e672ad9cd95dc1b58100c2609c2a7dbf4" Jan 31 04:06:21 crc kubenswrapper[4667]: I0131 04:06:21.475002 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" event={"ID":"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1","Type":"ContainerStarted","Data":"6f17dcd2e4af8382cd7b790476bde52a1a9da0e2ad82b484738e0fc7f69ea4b3"} Jan 31 04:06:21 crc kubenswrapper[4667]: I0131 04:06:21.475145 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:21 crc kubenswrapper[4667]: I0131 04:06:21.482918 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 31 04:06:21 crc kubenswrapper[4667]: I0131 04:06:21.495832 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" podStartSLOduration=3.495812368 podStartE2EDuration="3.495812368s" podCreationTimestamp="2026-01-31 04:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:06:21.49400662 +0000 UTC m=+1105.010341939" watchObservedRunningTime="2026-01-31 04:06:21.495812368 +0000 UTC m=+1105.012147667" Jan 31 04:06:21 crc kubenswrapper[4667]: I0131 04:06:21.538043 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:06:21 crc kubenswrapper[4667]: I0131 04:06:21.970934 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-glth8"] Jan 31 04:06:21 crc kubenswrapper[4667]: E0131 04:06:21.974011 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd7f92b-44c9-4765-99cf-9a42006f9f83" containerName="mariadb-account-create-update" Jan 31 04:06:21 crc kubenswrapper[4667]: I0131 04:06:21.974058 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd7f92b-44c9-4765-99cf-9a42006f9f83" containerName="mariadb-account-create-update" Jan 31 04:06:21 crc kubenswrapper[4667]: I0131 04:06:21.974355 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fd7f92b-44c9-4765-99cf-9a42006f9f83" containerName="mariadb-account-create-update" Jan 31 04:06:21 crc kubenswrapper[4667]: I0131 04:06:21.976319 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-glth8" Jan 31 04:06:21 crc kubenswrapper[4667]: I0131 04:06:21.984293 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-glth8"] Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.054725 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-dzwfc"] Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.055724 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-dzwfc" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.065502 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dzwfc"] Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.073140 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/120c8a1f-7144-4b39-9040-7ffc70da2eb2-operator-scripts\") pod \"barbican-db-create-glth8\" (UID: \"120c8a1f-7144-4b39-9040-7ffc70da2eb2\") " pod="openstack/barbican-db-create-glth8" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.073183 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzcg8\" (UniqueName: \"kubernetes.io/projected/120c8a1f-7144-4b39-9040-7ffc70da2eb2-kube-api-access-rzcg8\") pod \"barbican-db-create-glth8\" (UID: \"120c8a1f-7144-4b39-9040-7ffc70da2eb2\") " pod="openstack/barbican-db-create-glth8" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.166250 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-dd40-account-create-update-kqrht"] Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.167279 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dd40-account-create-update-kqrht" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.172669 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.175396 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/007ac74b-f070-45f8-9cf9-1807ec2563f2-operator-scripts\") pod \"cinder-db-create-dzwfc\" (UID: \"007ac74b-f070-45f8-9cf9-1807ec2563f2\") " pod="openstack/cinder-db-create-dzwfc" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.175442 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl2nn\" (UniqueName: \"kubernetes.io/projected/007ac74b-f070-45f8-9cf9-1807ec2563f2-kube-api-access-fl2nn\") pod \"cinder-db-create-dzwfc\" (UID: \"007ac74b-f070-45f8-9cf9-1807ec2563f2\") " pod="openstack/cinder-db-create-dzwfc" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.175505 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/120c8a1f-7144-4b39-9040-7ffc70da2eb2-operator-scripts\") pod \"barbican-db-create-glth8\" (UID: \"120c8a1f-7144-4b39-9040-7ffc70da2eb2\") " pod="openstack/barbican-db-create-glth8" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.175525 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzcg8\" (UniqueName: \"kubernetes.io/projected/120c8a1f-7144-4b39-9040-7ffc70da2eb2-kube-api-access-rzcg8\") pod \"barbican-db-create-glth8\" (UID: \"120c8a1f-7144-4b39-9040-7ffc70da2eb2\") " pod="openstack/barbican-db-create-glth8" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.176617 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/120c8a1f-7144-4b39-9040-7ffc70da2eb2-operator-scripts\") pod \"barbican-db-create-glth8\" (UID: \"120c8a1f-7144-4b39-9040-7ffc70da2eb2\") " pod="openstack/barbican-db-create-glth8" Jan 31 04:06:22 crc 
kubenswrapper[4667]: I0131 04:06:22.180477 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dd40-account-create-update-kqrht"] Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.233703 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzcg8\" (UniqueName: \"kubernetes.io/projected/120c8a1f-7144-4b39-9040-7ffc70da2eb2-kube-api-access-rzcg8\") pod \"barbican-db-create-glth8\" (UID: \"120c8a1f-7144-4b39-9040-7ffc70da2eb2\") " pod="openstack/barbican-db-create-glth8" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.277516 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/007ac74b-f070-45f8-9cf9-1807ec2563f2-operator-scripts\") pod \"cinder-db-create-dzwfc\" (UID: \"007ac74b-f070-45f8-9cf9-1807ec2563f2\") " pod="openstack/cinder-db-create-dzwfc" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.277565 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl2nn\" (UniqueName: \"kubernetes.io/projected/007ac74b-f070-45f8-9cf9-1807ec2563f2-kube-api-access-fl2nn\") pod \"cinder-db-create-dzwfc\" (UID: \"007ac74b-f070-45f8-9cf9-1807ec2563f2\") " pod="openstack/cinder-db-create-dzwfc" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.277606 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2edbaebb-5022-48a4-82ab-2cb5b23fae97-operator-scripts\") pod \"cinder-dd40-account-create-update-kqrht\" (UID: \"2edbaebb-5022-48a4-82ab-2cb5b23fae97\") " pod="openstack/cinder-dd40-account-create-update-kqrht" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.277679 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkmjz\" (UniqueName: \"kubernetes.io/projected/2edbaebb-5022-48a4-82ab-2cb5b23fae97-kube-api-access-fkmjz\") pod \"cinder-dd40-account-create-update-kqrht\" (UID: \"2edbaebb-5022-48a4-82ab-2cb5b23fae97\") " pod="openstack/cinder-dd40-account-create-update-kqrht" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.278493 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/007ac74b-f070-45f8-9cf9-1807ec2563f2-operator-scripts\") pod \"cinder-db-create-dzwfc\" (UID: \"007ac74b-f070-45f8-9cf9-1807ec2563f2\") " pod="openstack/cinder-db-create-dzwfc" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.295722 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-glth8" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.297666 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-607a-account-create-update-ktk8n"] Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.299177 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-607a-account-create-update-ktk8n" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.306028 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.308363 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-607a-account-create-update-ktk8n"] Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.328251 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl2nn\" (UniqueName: \"kubernetes.io/projected/007ac74b-f070-45f8-9cf9-1807ec2563f2-kube-api-access-fl2nn\") pod \"cinder-db-create-dzwfc\" (UID: \"007ac74b-f070-45f8-9cf9-1807ec2563f2\") " pod="openstack/cinder-db-create-dzwfc" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.369826 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dzwfc" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.382391 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfzcn\" (UniqueName: \"kubernetes.io/projected/36961045-4f23-401f-92a0-2fe30920bdf6-kube-api-access-xfzcn\") pod \"barbican-607a-account-create-update-ktk8n\" (UID: \"36961045-4f23-401f-92a0-2fe30920bdf6\") " pod="openstack/barbican-607a-account-create-update-ktk8n" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.382461 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkmjz\" (UniqueName: \"kubernetes.io/projected/2edbaebb-5022-48a4-82ab-2cb5b23fae97-kube-api-access-fkmjz\") pod \"cinder-dd40-account-create-update-kqrht\" (UID: \"2edbaebb-5022-48a4-82ab-2cb5b23fae97\") " pod="openstack/cinder-dd40-account-create-update-kqrht" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.382487 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36961045-4f23-401f-92a0-2fe30920bdf6-operator-scripts\") pod \"barbican-607a-account-create-update-ktk8n\" (UID: \"36961045-4f23-401f-92a0-2fe30920bdf6\") " pod="openstack/barbican-607a-account-create-update-ktk8n" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.382572 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2edbaebb-5022-48a4-82ab-2cb5b23fae97-operator-scripts\") pod \"cinder-dd40-account-create-update-kqrht\" (UID: \"2edbaebb-5022-48a4-82ab-2cb5b23fae97\") " pod="openstack/cinder-dd40-account-create-update-kqrht" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.383613 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2edbaebb-5022-48a4-82ab-2cb5b23fae97-operator-scripts\") pod \"cinder-dd40-account-create-update-kqrht\" (UID: \"2edbaebb-5022-48a4-82ab-2cb5b23fae97\") " pod="openstack/cinder-dd40-account-create-update-kqrht" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.401967 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-fmcdw"] Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.407721 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkmjz\" (UniqueName: \"kubernetes.io/projected/2edbaebb-5022-48a4-82ab-2cb5b23fae97-kube-api-access-fkmjz\") pod 
\"cinder-dd40-account-create-update-kqrht\" (UID: \"2edbaebb-5022-48a4-82ab-2cb5b23fae97\") " pod="openstack/cinder-dd40-account-create-update-kqrht" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.413765 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fmcdw" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.434983 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fmcdw"] Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.467537 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-txc7n"] Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.469103 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-txc7n" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.489661 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.490192 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.490593 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.491309 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rw7d7" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.500619 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dd40-account-create-update-kqrht" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.501887 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfx7j\" (UniqueName: \"kubernetes.io/projected/1ec06763-5d93-465b-ade2-557cc5072827-kube-api-access-vfx7j\") pod \"neutron-db-create-fmcdw\" (UID: \"1ec06763-5d93-465b-ade2-557cc5072827\") " pod="openstack/neutron-db-create-fmcdw" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.501969 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ec06763-5d93-465b-ade2-557cc5072827-operator-scripts\") pod \"neutron-db-create-fmcdw\" (UID: \"1ec06763-5d93-465b-ade2-557cc5072827\") " pod="openstack/neutron-db-create-fmcdw" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.502058 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfzcn\" (UniqueName: \"kubernetes.io/projected/36961045-4f23-401f-92a0-2fe30920bdf6-kube-api-access-xfzcn\") pod \"barbican-607a-account-create-update-ktk8n\" (UID: \"36961045-4f23-401f-92a0-2fe30920bdf6\") " pod="openstack/barbican-607a-account-create-update-ktk8n" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.502105 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36961045-4f23-401f-92a0-2fe30920bdf6-operator-scripts\") pod \"barbican-607a-account-create-update-ktk8n\" (UID: \"36961045-4f23-401f-92a0-2fe30920bdf6\") " pod="openstack/barbican-607a-account-create-update-ktk8n" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.537868 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/36961045-4f23-401f-92a0-2fe30920bdf6-operator-scripts\") pod \"barbican-607a-account-create-update-ktk8n\" (UID: \"36961045-4f23-401f-92a0-2fe30920bdf6\") " pod="openstack/barbican-607a-account-create-update-ktk8n" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.545347 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-txc7n"] Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.546288 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfzcn\" (UniqueName: \"kubernetes.io/projected/36961045-4f23-401f-92a0-2fe30920bdf6-kube-api-access-xfzcn\") pod \"barbican-607a-account-create-update-ktk8n\" (UID: \"36961045-4f23-401f-92a0-2fe30920bdf6\") " pod="openstack/barbican-607a-account-create-update-ktk8n" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.605798 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78fd8a04-83bd-43d3-9a36-e116ecb3951a-combined-ca-bundle\") pod \"keystone-db-sync-txc7n\" (UID: \"78fd8a04-83bd-43d3-9a36-e116ecb3951a\") " pod="openstack/keystone-db-sync-txc7n" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.605908 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs9gb\" (UniqueName: \"kubernetes.io/projected/78fd8a04-83bd-43d3-9a36-e116ecb3951a-kube-api-access-fs9gb\") pod \"keystone-db-sync-txc7n\" (UID: \"78fd8a04-83bd-43d3-9a36-e116ecb3951a\") " pod="openstack/keystone-db-sync-txc7n" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.605941 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfx7j\" (UniqueName: \"kubernetes.io/projected/1ec06763-5d93-465b-ade2-557cc5072827-kube-api-access-vfx7j\") pod \"neutron-db-create-fmcdw\" (UID: \"1ec06763-5d93-465b-ade2-557cc5072827\") " pod="openstack/neutron-db-create-fmcdw" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.606009 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ec06763-5d93-465b-ade2-557cc5072827-operator-scripts\") pod \"neutron-db-create-fmcdw\" (UID: \"1ec06763-5d93-465b-ade2-557cc5072827\") " pod="openstack/neutron-db-create-fmcdw" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.606052 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78fd8a04-83bd-43d3-9a36-e116ecb3951a-config-data\") pod \"keystone-db-sync-txc7n\" (UID: \"78fd8a04-83bd-43d3-9a36-e116ecb3951a\") " pod="openstack/keystone-db-sync-txc7n" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.611603 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ec06763-5d93-465b-ade2-557cc5072827-operator-scripts\") pod \"neutron-db-create-fmcdw\" (UID: \"1ec06763-5d93-465b-ade2-557cc5072827\") " pod="openstack/neutron-db-create-fmcdw" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.666879 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfx7j\" (UniqueName: \"kubernetes.io/projected/1ec06763-5d93-465b-ade2-557cc5072827-kube-api-access-vfx7j\") pod \"neutron-db-create-fmcdw\" (UID: \"1ec06763-5d93-465b-ade2-557cc5072827\") " pod="openstack/neutron-db-create-fmcdw" 
Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.688751 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-be21-account-create-update-jf2zc"] Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.690103 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-be21-account-create-update-jf2zc" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.695335 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.707230 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs9gb\" (UniqueName: \"kubernetes.io/projected/78fd8a04-83bd-43d3-9a36-e116ecb3951a-kube-api-access-fs9gb\") pod \"keystone-db-sync-txc7n\" (UID: \"78fd8a04-83bd-43d3-9a36-e116ecb3951a\") " pod="openstack/keystone-db-sync-txc7n" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.707342 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78fd8a04-83bd-43d3-9a36-e116ecb3951a-config-data\") pod \"keystone-db-sync-txc7n\" (UID: \"78fd8a04-83bd-43d3-9a36-e116ecb3951a\") " pod="openstack/keystone-db-sync-txc7n" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.707458 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78fd8a04-83bd-43d3-9a36-e116ecb3951a-combined-ca-bundle\") pod \"keystone-db-sync-txc7n\" (UID: \"78fd8a04-83bd-43d3-9a36-e116ecb3951a\") " pod="openstack/keystone-db-sync-txc7n" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.718174 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78fd8a04-83bd-43d3-9a36-e116ecb3951a-combined-ca-bundle\") pod \"keystone-db-sync-txc7n\" (UID: \"78fd8a04-83bd-43d3-9a36-e116ecb3951a\") " pod="openstack/keystone-db-sync-txc7n" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.718672 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-fmcdw" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.721874 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78fd8a04-83bd-43d3-9a36-e116ecb3951a-config-data\") pod \"keystone-db-sync-txc7n\" (UID: \"78fd8a04-83bd-43d3-9a36-e116ecb3951a\") " pod="openstack/keystone-db-sync-txc7n" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.750789 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-be21-account-create-update-jf2zc"] Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.771339 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs9gb\" (UniqueName: \"kubernetes.io/projected/78fd8a04-83bd-43d3-9a36-e116ecb3951a-kube-api-access-fs9gb\") pod \"keystone-db-sync-txc7n\" (UID: \"78fd8a04-83bd-43d3-9a36-e116ecb3951a\") " pod="openstack/keystone-db-sync-txc7n" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.809448 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7dcj\" (UniqueName: \"kubernetes.io/projected/075dd640-8e38-4b34-b2fb-437599bbeb08-kube-api-access-t7dcj\") pod \"neutron-be21-account-create-update-jf2zc\" (UID: \"075dd640-8e38-4b34-b2fb-437599bbeb08\") " pod="openstack/neutron-be21-account-create-update-jf2zc" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.812145 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075dd640-8e38-4b34-b2fb-437599bbeb08-operator-scripts\") pod \"neutron-be21-account-create-update-jf2zc\" (UID: \"075dd640-8e38-4b34-b2fb-437599bbeb08\") " pod="openstack/neutron-be21-account-create-update-jf2zc" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.831126 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-607a-account-create-update-ktk8n" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.913924 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7dcj\" (UniqueName: \"kubernetes.io/projected/075dd640-8e38-4b34-b2fb-437599bbeb08-kube-api-access-t7dcj\") pod \"neutron-be21-account-create-update-jf2zc\" (UID: \"075dd640-8e38-4b34-b2fb-437599bbeb08\") " pod="openstack/neutron-be21-account-create-update-jf2zc" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.914015 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075dd640-8e38-4b34-b2fb-437599bbeb08-operator-scripts\") pod \"neutron-be21-account-create-update-jf2zc\" (UID: \"075dd640-8e38-4b34-b2fb-437599bbeb08\") " pod="openstack/neutron-be21-account-create-update-jf2zc" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.915439 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075dd640-8e38-4b34-b2fb-437599bbeb08-operator-scripts\") pod \"neutron-be21-account-create-update-jf2zc\" (UID: \"075dd640-8e38-4b34-b2fb-437599bbeb08\") " pod="openstack/neutron-be21-account-create-update-jf2zc" Jan 31 04:06:22 crc kubenswrapper[4667]: I0131 04:06:22.937657 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7dcj\" (UniqueName: \"kubernetes.io/projected/075dd640-8e38-4b34-b2fb-437599bbeb08-kube-api-access-t7dcj\") pod \"neutron-be21-account-create-update-jf2zc\" (UID: \"075dd640-8e38-4b34-b2fb-437599bbeb08\") " pod="openstack/neutron-be21-account-create-update-jf2zc" Jan 31 04:06:23 crc kubenswrapper[4667]: I0131 04:06:23.047344 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-glth8"] Jan 31 04:06:23 crc kubenswrapper[4667]: I0131 04:06:23.051407 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-txc7n" Jan 31 04:06:23 crc kubenswrapper[4667]: I0131 04:06:23.064629 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-be21-account-create-update-jf2zc" Jan 31 04:06:23 crc kubenswrapper[4667]: I0131 04:06:23.221472 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-dzwfc"] Jan 31 04:06:23 crc kubenswrapper[4667]: W0131 04:06:23.254081 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod007ac74b_f070_45f8_9cf9_1807ec2563f2.slice/crio-46fdd6c10cf24b7bb31548dba1b8a416d568d70500a72feabe1d94b95a154c3a WatchSource:0}: Error finding container 46fdd6c10cf24b7bb31548dba1b8a416d568d70500a72feabe1d94b95a154c3a: Status 404 returned error can't find the container with id 46fdd6c10cf24b7bb31548dba1b8a416d568d70500a72feabe1d94b95a154c3a Jan 31 04:06:23 crc kubenswrapper[4667]: I0131 04:06:23.457638 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dd40-account-create-update-kqrht"] Jan 31 04:06:23 crc kubenswrapper[4667]: I0131 04:06:23.657993 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-glth8" event={"ID":"120c8a1f-7144-4b39-9040-7ffc70da2eb2","Type":"ContainerStarted","Data":"13fd28495f4f9166f48e515159efcf1d8e72a7ccc5d3502caf2e35e75de27523"} Jan 31 04:06:23 crc kubenswrapper[4667]: I0131 04:06:23.671677 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dzwfc" event={"ID":"007ac74b-f070-45f8-9cf9-1807ec2563f2","Type":"ContainerStarted","Data":"46fdd6c10cf24b7bb31548dba1b8a416d568d70500a72feabe1d94b95a154c3a"} Jan 31 04:06:23 crc kubenswrapper[4667]: I0131 04:06:23.674439 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dd40-account-create-update-kqrht" event={"ID":"2edbaebb-5022-48a4-82ab-2cb5b23fae97","Type":"ContainerStarted","Data":"f0d407bf16d2032b83916fee0cc4824906781a0de481d0ce329c4b356e5c35e4"} Jan 31 04:06:23 crc kubenswrapper[4667]: I0131 04:06:23.715533 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-glth8" podStartSLOduration=2.715507537 podStartE2EDuration="2.715507537s" podCreationTimestamp="2026-01-31 04:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:06:23.713307509 +0000 UTC m=+1107.229642808" watchObservedRunningTime="2026-01-31 04:06:23.715507537 +0000 UTC m=+1107.231842836" Jan 31 04:06:23 crc kubenswrapper[4667]: I0131 04:06:23.765550 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-dzwfc" podStartSLOduration=1.765525273 podStartE2EDuration="1.765525273s" podCreationTimestamp="2026-01-31 04:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:06:23.751626114 +0000 UTC m=+1107.267961413" watchObservedRunningTime="2026-01-31 04:06:23.765525273 +0000 UTC m=+1107.281860572" Jan 31 04:06:23 crc kubenswrapper[4667]: I0131 04:06:23.794528 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fmcdw"] Jan 31 04:06:23 crc kubenswrapper[4667]: I0131 04:06:23.973564 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-607a-account-create-update-ktk8n"] Jan 31 04:06:24 crc kubenswrapper[4667]: I0131 04:06:24.076415 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-txc7n"] Jan 31 04:06:24 crc 
kubenswrapper[4667]: I0131 04:06:24.133026 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-be21-account-create-update-jf2zc"] Jan 31 04:06:24 crc kubenswrapper[4667]: I0131 04:06:24.683699 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-be21-account-create-update-jf2zc" event={"ID":"075dd640-8e38-4b34-b2fb-437599bbeb08","Type":"ContainerStarted","Data":"e700e19d2ca5598715b45a8e1325e3ea36ecd48739853706c79597b236a0b2a9"} Jan 31 04:06:24 crc kubenswrapper[4667]: I0131 04:06:24.684050 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-be21-account-create-update-jf2zc" event={"ID":"075dd640-8e38-4b34-b2fb-437599bbeb08","Type":"ContainerStarted","Data":"ff8a74d451d2d28acb49a36a3de09386e61d37e23d5273b9365ce3a1b59e931d"} Jan 31 04:06:24 crc kubenswrapper[4667]: I0131 04:06:24.687129 4667 generic.go:334] "Generic (PLEG): container finished" podID="1ec06763-5d93-465b-ade2-557cc5072827" containerID="0ad35232b5f12cc5e741509fe5b5e67f9edcd05adc462628411a16d0ba90f272" exitCode=0 Jan 31 04:06:24 crc kubenswrapper[4667]: I0131 04:06:24.687189 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fmcdw" event={"ID":"1ec06763-5d93-465b-ade2-557cc5072827","Type":"ContainerDied","Data":"0ad35232b5f12cc5e741509fe5b5e67f9edcd05adc462628411a16d0ba90f272"} Jan 31 04:06:24 crc kubenswrapper[4667]: I0131 04:06:24.687252 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fmcdw" event={"ID":"1ec06763-5d93-465b-ade2-557cc5072827","Type":"ContainerStarted","Data":"a5e32c0035e9b99291e7dce306b9131ddf0abb187285ce8cfb4c75a4bcc6e78f"} Jan 31 04:06:24 crc kubenswrapper[4667]: I0131 04:06:24.690334 4667 generic.go:334] "Generic (PLEG): container finished" podID="007ac74b-f070-45f8-9cf9-1807ec2563f2" containerID="8f526d3fc408b027b2984bb6a4129f491e7482d1311b67474d228372ac47d7e0" exitCode=0 Jan 31 04:06:24 crc kubenswrapper[4667]: I0131 04:06:24.690412 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dzwfc" event={"ID":"007ac74b-f070-45f8-9cf9-1807ec2563f2","Type":"ContainerDied","Data":"8f526d3fc408b027b2984bb6a4129f491e7482d1311b67474d228372ac47d7e0"} Jan 31 04:06:24 crc kubenswrapper[4667]: I0131 04:06:24.691856 4667 generic.go:334] "Generic (PLEG): container finished" podID="2edbaebb-5022-48a4-82ab-2cb5b23fae97" containerID="2b8a1c25c88ebf53471e593f87e8aceaf28ff19cc1ab03ea3a241d6cd5619274" exitCode=0 Jan 31 04:06:24 crc kubenswrapper[4667]: I0131 04:06:24.691935 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dd40-account-create-update-kqrht" event={"ID":"2edbaebb-5022-48a4-82ab-2cb5b23fae97","Type":"ContainerDied","Data":"2b8a1c25c88ebf53471e593f87e8aceaf28ff19cc1ab03ea3a241d6cd5619274"} Jan 31 04:06:24 crc kubenswrapper[4667]: I0131 04:06:24.693429 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-txc7n" event={"ID":"78fd8a04-83bd-43d3-9a36-e116ecb3951a","Type":"ContainerStarted","Data":"93fe4eba14fa1d99bd74728aaacb76bba29f925ec2f0633726ec65b20c9445e8"} Jan 31 04:06:24 crc kubenswrapper[4667]: I0131 04:06:24.695863 4667 generic.go:334] "Generic (PLEG): container finished" podID="36961045-4f23-401f-92a0-2fe30920bdf6" containerID="58ca5e7bd87e3f545c7358c0d9d32b5178f7946fa0a397d40b132f23ec981a5e" exitCode=0 Jan 31 04:06:24 crc kubenswrapper[4667]: I0131 04:06:24.695974 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-607a-account-create-update-ktk8n" event={"ID":"36961045-4f23-401f-92a0-2fe30920bdf6","Type":"ContainerDied","Data":"58ca5e7bd87e3f545c7358c0d9d32b5178f7946fa0a397d40b132f23ec981a5e"} Jan 31 04:06:24 crc kubenswrapper[4667]: I0131 04:06:24.696010 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-607a-account-create-update-ktk8n" event={"ID":"36961045-4f23-401f-92a0-2fe30920bdf6","Type":"ContainerStarted","Data":"5ab086579aac5bd57cb3d81c31302026a3b1b03e29aa671856423eeb7566b6f3"} Jan 31 04:06:24 crc kubenswrapper[4667]: I0131 04:06:24.697585 4667 generic.go:334] "Generic (PLEG): container finished" podID="120c8a1f-7144-4b39-9040-7ffc70da2eb2" containerID="41a9455c5718466cb3f2d2f81ba67e00d4664bf2285080baa0af12463207492a" exitCode=0 Jan 31 04:06:24 crc kubenswrapper[4667]: I0131 04:06:24.697639 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-glth8" event={"ID":"120c8a1f-7144-4b39-9040-7ffc70da2eb2","Type":"ContainerDied","Data":"41a9455c5718466cb3f2d2f81ba67e00d4664bf2285080baa0af12463207492a"} Jan 31 04:06:24 crc kubenswrapper[4667]: I0131 04:06:24.710110 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-be21-account-create-update-jf2zc" podStartSLOduration=2.710086383 podStartE2EDuration="2.710086383s" podCreationTimestamp="2026-01-31 04:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:06:24.700995142 +0000 UTC m=+1108.217330441" watchObservedRunningTime="2026-01-31 04:06:24.710086383 +0000 UTC m=+1108.226421682" Jan 31 04:06:25 crc kubenswrapper[4667]: E0131 04:06:25.022523 4667 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.111:34666->38.102.83.111:37867: write tcp 38.102.83.111:34666->38.102.83.111:37867: write: broken pipe Jan 31 04:06:25 crc kubenswrapper[4667]: I0131 04:06:25.715703 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9mmhm" event={"ID":"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4","Type":"ContainerStarted","Data":"f0fbfb66c2cd178083036c05278c819f4f045ef896b9882c36534e16f0433fc5"} Jan 31 04:06:25 crc kubenswrapper[4667]: I0131 04:06:25.721937 4667 generic.go:334] "Generic (PLEG): container finished" podID="075dd640-8e38-4b34-b2fb-437599bbeb08" containerID="e700e19d2ca5598715b45a8e1325e3ea36ecd48739853706c79597b236a0b2a9" exitCode=0 Jan 31 04:06:25 crc kubenswrapper[4667]: I0131 04:06:25.722070 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-be21-account-create-update-jf2zc" event={"ID":"075dd640-8e38-4b34-b2fb-437599bbeb08","Type":"ContainerDied","Data":"e700e19d2ca5598715b45a8e1325e3ea36ecd48739853706c79597b236a0b2a9"} Jan 31 04:06:25 crc kubenswrapper[4667]: I0131 04:06:25.757177 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-9mmhm" podStartSLOduration=5.424852666 podStartE2EDuration="41.75715291s" podCreationTimestamp="2026-01-31 04:05:44 +0000 UTC" firstStartedPulling="2026-01-31 04:05:47.531806061 +0000 UTC m=+1071.048141360" lastFinishedPulling="2026-01-31 04:06:23.864106305 +0000 UTC m=+1107.380441604" observedRunningTime="2026-01-31 04:06:25.755800084 +0000 UTC m=+1109.272135383" watchObservedRunningTime="2026-01-31 04:06:25.75715291 +0000 UTC m=+1109.273488209" Jan 31 04:06:26 crc kubenswrapper[4667]: I0131 04:06:26.381917 4667 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/barbican-db-create-glth8" Jan 31 04:06:26 crc kubenswrapper[4667]: I0131 04:06:26.498683 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/120c8a1f-7144-4b39-9040-7ffc70da2eb2-operator-scripts\") pod \"120c8a1f-7144-4b39-9040-7ffc70da2eb2\" (UID: \"120c8a1f-7144-4b39-9040-7ffc70da2eb2\") " Jan 31 04:06:26 crc kubenswrapper[4667]: I0131 04:06:26.499367 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzcg8\" (UniqueName: \"kubernetes.io/projected/120c8a1f-7144-4b39-9040-7ffc70da2eb2-kube-api-access-rzcg8\") pod \"120c8a1f-7144-4b39-9040-7ffc70da2eb2\" (UID: \"120c8a1f-7144-4b39-9040-7ffc70da2eb2\") " Jan 31 04:06:26 crc kubenswrapper[4667]: I0131 04:06:26.500179 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/120c8a1f-7144-4b39-9040-7ffc70da2eb2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "120c8a1f-7144-4b39-9040-7ffc70da2eb2" (UID: "120c8a1f-7144-4b39-9040-7ffc70da2eb2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:26 crc kubenswrapper[4667]: I0131 04:06:26.533362 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/120c8a1f-7144-4b39-9040-7ffc70da2eb2-kube-api-access-rzcg8" (OuterVolumeSpecName: "kube-api-access-rzcg8") pod "120c8a1f-7144-4b39-9040-7ffc70da2eb2" (UID: "120c8a1f-7144-4b39-9040-7ffc70da2eb2"). InnerVolumeSpecName "kube-api-access-rzcg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.600740 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzcg8\" (UniqueName: \"kubernetes.io/projected/120c8a1f-7144-4b39-9040-7ffc70da2eb2-kube-api-access-rzcg8\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.600776 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/120c8a1f-7144-4b39-9040-7ffc70da2eb2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.687340 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dd40-account-create-update-kqrht" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.694719 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dzwfc" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.710360 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-607a-account-create-update-ktk8n" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.710546 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-fmcdw" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.733027 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fmcdw" event={"ID":"1ec06763-5d93-465b-ade2-557cc5072827","Type":"ContainerDied","Data":"a5e32c0035e9b99291e7dce306b9131ddf0abb187285ce8cfb4c75a4bcc6e78f"} Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.733060 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5e32c0035e9b99291e7dce306b9131ddf0abb187285ce8cfb4c75a4bcc6e78f" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.733117 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fmcdw" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.738753 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-dzwfc" event={"ID":"007ac74b-f070-45f8-9cf9-1807ec2563f2","Type":"ContainerDied","Data":"46fdd6c10cf24b7bb31548dba1b8a416d568d70500a72feabe1d94b95a154c3a"} Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.738803 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46fdd6c10cf24b7bb31548dba1b8a416d568d70500a72feabe1d94b95a154c3a" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.738909 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-dzwfc" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.752048 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dd40-account-create-update-kqrht" event={"ID":"2edbaebb-5022-48a4-82ab-2cb5b23fae97","Type":"ContainerDied","Data":"f0d407bf16d2032b83916fee0cc4824906781a0de481d0ce329c4b356e5c35e4"} Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.752099 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0d407bf16d2032b83916fee0cc4824906781a0de481d0ce329c4b356e5c35e4" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.752178 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dd40-account-create-update-kqrht" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.760968 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-607a-account-create-update-ktk8n" event={"ID":"36961045-4f23-401f-92a0-2fe30920bdf6","Type":"ContainerDied","Data":"5ab086579aac5bd57cb3d81c31302026a3b1b03e29aa671856423eeb7566b6f3"} Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.761022 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ab086579aac5bd57cb3d81c31302026a3b1b03e29aa671856423eeb7566b6f3" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.761096 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-607a-account-create-update-ktk8n" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.763112 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-glth8" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.763875 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-glth8" event={"ID":"120c8a1f-7144-4b39-9040-7ffc70da2eb2","Type":"ContainerDied","Data":"13fd28495f4f9166f48e515159efcf1d8e72a7ccc5d3502caf2e35e75de27523"} Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.763919 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13fd28495f4f9166f48e515159efcf1d8e72a7ccc5d3502caf2e35e75de27523" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.805560 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl2nn\" (UniqueName: \"kubernetes.io/projected/007ac74b-f070-45f8-9cf9-1807ec2563f2-kube-api-access-fl2nn\") pod \"007ac74b-f070-45f8-9cf9-1807ec2563f2\" (UID: \"007ac74b-f070-45f8-9cf9-1807ec2563f2\") " Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.805633 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkmjz\" (UniqueName: \"kubernetes.io/projected/2edbaebb-5022-48a4-82ab-2cb5b23fae97-kube-api-access-fkmjz\") pod \"2edbaebb-5022-48a4-82ab-2cb5b23fae97\" (UID: \"2edbaebb-5022-48a4-82ab-2cb5b23fae97\") " Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.805693 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2edbaebb-5022-48a4-82ab-2cb5b23fae97-operator-scripts\") pod \"2edbaebb-5022-48a4-82ab-2cb5b23fae97\" (UID: \"2edbaebb-5022-48a4-82ab-2cb5b23fae97\") " Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.805768 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/007ac74b-f070-45f8-9cf9-1807ec2563f2-operator-scripts\") pod \"007ac74b-f070-45f8-9cf9-1807ec2563f2\" (UID: \"007ac74b-f070-45f8-9cf9-1807ec2563f2\") " Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.806724 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2edbaebb-5022-48a4-82ab-2cb5b23fae97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2edbaebb-5022-48a4-82ab-2cb5b23fae97" (UID: "2edbaebb-5022-48a4-82ab-2cb5b23fae97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.808347 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/007ac74b-f070-45f8-9cf9-1807ec2563f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "007ac74b-f070-45f8-9cf9-1807ec2563f2" (UID: "007ac74b-f070-45f8-9cf9-1807ec2563f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.811787 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/007ac74b-f070-45f8-9cf9-1807ec2563f2-kube-api-access-fl2nn" (OuterVolumeSpecName: "kube-api-access-fl2nn") pod "007ac74b-f070-45f8-9cf9-1807ec2563f2" (UID: "007ac74b-f070-45f8-9cf9-1807ec2563f2"). InnerVolumeSpecName "kube-api-access-fl2nn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.812855 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2edbaebb-5022-48a4-82ab-2cb5b23fae97-kube-api-access-fkmjz" (OuterVolumeSpecName: "kube-api-access-fkmjz") pod "2edbaebb-5022-48a4-82ab-2cb5b23fae97" (UID: "2edbaebb-5022-48a4-82ab-2cb5b23fae97"). InnerVolumeSpecName "kube-api-access-fkmjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.908261 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfzcn\" (UniqueName: \"kubernetes.io/projected/36961045-4f23-401f-92a0-2fe30920bdf6-kube-api-access-xfzcn\") pod \"36961045-4f23-401f-92a0-2fe30920bdf6\" (UID: \"36961045-4f23-401f-92a0-2fe30920bdf6\") " Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.908349 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36961045-4f23-401f-92a0-2fe30920bdf6-operator-scripts\") pod \"36961045-4f23-401f-92a0-2fe30920bdf6\" (UID: \"36961045-4f23-401f-92a0-2fe30920bdf6\") " Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.908495 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfx7j\" (UniqueName: \"kubernetes.io/projected/1ec06763-5d93-465b-ade2-557cc5072827-kube-api-access-vfx7j\") pod \"1ec06763-5d93-465b-ade2-557cc5072827\" (UID: \"1ec06763-5d93-465b-ade2-557cc5072827\") " Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.908622 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ec06763-5d93-465b-ade2-557cc5072827-operator-scripts\") pod \"1ec06763-5d93-465b-ade2-557cc5072827\" (UID: \"1ec06763-5d93-465b-ade2-557cc5072827\") " Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.909026 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl2nn\" (UniqueName: \"kubernetes.io/projected/007ac74b-f070-45f8-9cf9-1807ec2563f2-kube-api-access-fl2nn\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.909037 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkmjz\" (UniqueName: \"kubernetes.io/projected/2edbaebb-5022-48a4-82ab-2cb5b23fae97-kube-api-access-fkmjz\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.909049 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2edbaebb-5022-48a4-82ab-2cb5b23fae97-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.909059 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/007ac74b-f070-45f8-9cf9-1807ec2563f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.909665 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36961045-4f23-401f-92a0-2fe30920bdf6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "36961045-4f23-401f-92a0-2fe30920bdf6" (UID: "36961045-4f23-401f-92a0-2fe30920bdf6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.911916 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ec06763-5d93-465b-ade2-557cc5072827-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ec06763-5d93-465b-ade2-557cc5072827" (UID: "1ec06763-5d93-465b-ade2-557cc5072827"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.912364 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36961045-4f23-401f-92a0-2fe30920bdf6-kube-api-access-xfzcn" (OuterVolumeSpecName: "kube-api-access-xfzcn") pod "36961045-4f23-401f-92a0-2fe30920bdf6" (UID: "36961045-4f23-401f-92a0-2fe30920bdf6"). InnerVolumeSpecName "kube-api-access-xfzcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:26.914520 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ec06763-5d93-465b-ade2-557cc5072827-kube-api-access-vfx7j" (OuterVolumeSpecName: "kube-api-access-vfx7j") pod "1ec06763-5d93-465b-ade2-557cc5072827" (UID: "1ec06763-5d93-465b-ade2-557cc5072827"). InnerVolumeSpecName "kube-api-access-vfx7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:27.010647 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36961045-4f23-401f-92a0-2fe30920bdf6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:27.010690 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfx7j\" (UniqueName: \"kubernetes.io/projected/1ec06763-5d93-465b-ade2-557cc5072827-kube-api-access-vfx7j\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:27.010701 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ec06763-5d93-465b-ade2-557cc5072827-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:27.010710 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfzcn\" (UniqueName: \"kubernetes.io/projected/36961045-4f23-401f-92a0-2fe30920bdf6-kube-api-access-xfzcn\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:27.116966 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-be21-account-create-update-jf2zc" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:27.319562 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7dcj\" (UniqueName: \"kubernetes.io/projected/075dd640-8e38-4b34-b2fb-437599bbeb08-kube-api-access-t7dcj\") pod \"075dd640-8e38-4b34-b2fb-437599bbeb08\" (UID: \"075dd640-8e38-4b34-b2fb-437599bbeb08\") " Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:27.321069 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075dd640-8e38-4b34-b2fb-437599bbeb08-operator-scripts\") pod \"075dd640-8e38-4b34-b2fb-437599bbeb08\" (UID: \"075dd640-8e38-4b34-b2fb-437599bbeb08\") " Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:27.321688 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/075dd640-8e38-4b34-b2fb-437599bbeb08-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "075dd640-8e38-4b34-b2fb-437599bbeb08" (UID: "075dd640-8e38-4b34-b2fb-437599bbeb08"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:27.322267 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075dd640-8e38-4b34-b2fb-437599bbeb08-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:27.336705 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/075dd640-8e38-4b34-b2fb-437599bbeb08-kube-api-access-t7dcj" (OuterVolumeSpecName: "kube-api-access-t7dcj") pod "075dd640-8e38-4b34-b2fb-437599bbeb08" (UID: "075dd640-8e38-4b34-b2fb-437599bbeb08"). InnerVolumeSpecName "kube-api-access-t7dcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:27.423488 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7dcj\" (UniqueName: \"kubernetes.io/projected/075dd640-8e38-4b34-b2fb-437599bbeb08-kube-api-access-t7dcj\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:27.777968 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-be21-account-create-update-jf2zc" event={"ID":"075dd640-8e38-4b34-b2fb-437599bbeb08","Type":"ContainerDied","Data":"ff8a74d451d2d28acb49a36a3de09386e61d37e23d5273b9365ce3a1b59e931d"} Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:27.778287 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff8a74d451d2d28acb49a36a3de09386e61d37e23d5273b9365ce3a1b59e931d" Jan 31 04:06:27 crc kubenswrapper[4667]: I0131 04:06:27.778440 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-be21-account-create-update-jf2zc" Jan 31 04:06:29 crc kubenswrapper[4667]: I0131 04:06:29.089560 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:06:29 crc kubenswrapper[4667]: I0131 04:06:29.147468 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gdsvg"] Jan 31 04:06:29 crc kubenswrapper[4667]: I0131 04:06:29.147745 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-gdsvg" podUID="eb505db2-7884-475e-9bed-884650bfaeb8" containerName="dnsmasq-dns" containerID="cri-o://b04549fa0f73592bcbd1e3591ef899631912f897fb9e301b70a85041e6b82237" gracePeriod=10 Jan 31 04:06:29 crc kubenswrapper[4667]: I0131 04:06:29.799857 4667 generic.go:334] "Generic (PLEG): container finished" podID="eb505db2-7884-475e-9bed-884650bfaeb8" containerID="b04549fa0f73592bcbd1e3591ef899631912f897fb9e301b70a85041e6b82237" exitCode=0 Jan 31 04:06:29 crc kubenswrapper[4667]: I0131 04:06:29.799914 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gdsvg" event={"ID":"eb505db2-7884-475e-9bed-884650bfaeb8","Type":"ContainerDied","Data":"b04549fa0f73592bcbd1e3591ef899631912f897fb9e301b70a85041e6b82237"} Jan 31 04:06:31 crc kubenswrapper[4667]: I0131 04:06:31.571684 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-gdsvg" podUID="eb505db2-7884-475e-9bed-884650bfaeb8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: connect: connection refused" Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.086239 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-gdsvg" Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.227556 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-ovsdbserver-nb\") pod \"eb505db2-7884-475e-9bed-884650bfaeb8\" (UID: \"eb505db2-7884-475e-9bed-884650bfaeb8\") " Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.228197 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-dns-svc\") pod \"eb505db2-7884-475e-9bed-884650bfaeb8\" (UID: \"eb505db2-7884-475e-9bed-884650bfaeb8\") " Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.228256 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-ovsdbserver-sb\") pod \"eb505db2-7884-475e-9bed-884650bfaeb8\" (UID: \"eb505db2-7884-475e-9bed-884650bfaeb8\") " Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.228325 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjk58\" (UniqueName: \"kubernetes.io/projected/eb505db2-7884-475e-9bed-884650bfaeb8-kube-api-access-gjk58\") pod \"eb505db2-7884-475e-9bed-884650bfaeb8\" (UID: \"eb505db2-7884-475e-9bed-884650bfaeb8\") " Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.228369 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-config\") pod \"eb505db2-7884-475e-9bed-884650bfaeb8\" (UID: \"eb505db2-7884-475e-9bed-884650bfaeb8\") " Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.236821 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb505db2-7884-475e-9bed-884650bfaeb8-kube-api-access-gjk58" (OuterVolumeSpecName: "kube-api-access-gjk58") pod "eb505db2-7884-475e-9bed-884650bfaeb8" (UID: "eb505db2-7884-475e-9bed-884650bfaeb8"). InnerVolumeSpecName "kube-api-access-gjk58". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.270549 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb505db2-7884-475e-9bed-884650bfaeb8" (UID: "eb505db2-7884-475e-9bed-884650bfaeb8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.274944 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-config" (OuterVolumeSpecName: "config") pod "eb505db2-7884-475e-9bed-884650bfaeb8" (UID: "eb505db2-7884-475e-9bed-884650bfaeb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.286258 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb505db2-7884-475e-9bed-884650bfaeb8" (UID: "eb505db2-7884-475e-9bed-884650bfaeb8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.298296 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eb505db2-7884-475e-9bed-884650bfaeb8" (UID: "eb505db2-7884-475e-9bed-884650bfaeb8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.331571 4667 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.331612 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.331654 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjk58\" (UniqueName: \"kubernetes.io/projected/eb505db2-7884-475e-9bed-884650bfaeb8-kube-api-access-gjk58\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.331669 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.331682 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb505db2-7884-475e-9bed-884650bfaeb8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.875765 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-txc7n" event={"ID":"78fd8a04-83bd-43d3-9a36-e116ecb3951a","Type":"ContainerStarted","Data":"49a2e52ab6872fb3a86d88661813b708e00bc2f970f66d3202030bec584e4a8d"} Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.878755 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-gdsvg" event={"ID":"eb505db2-7884-475e-9bed-884650bfaeb8","Type":"ContainerDied","Data":"42ab997c8a1966ee6ddda509f79a829a6048e746fcdd232cb1a32a65229d8a92"} Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.878807 4667 scope.go:117] "RemoveContainer" containerID="b04549fa0f73592bcbd1e3591ef899631912f897fb9e301b70a85041e6b82237" Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.878960 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-gdsvg" Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.902551 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-txc7n" podStartSLOduration=3.12204325 podStartE2EDuration="10.902527606s" podCreationTimestamp="2026-01-31 04:06:22 +0000 UTC" firstStartedPulling="2026-01-31 04:06:24.083272493 +0000 UTC m=+1107.599607782" lastFinishedPulling="2026-01-31 04:06:31.863756849 +0000 UTC m=+1115.380092138" observedRunningTime="2026-01-31 04:06:32.897234116 +0000 UTC m=+1116.413569435" watchObservedRunningTime="2026-01-31 04:06:32.902527606 +0000 UTC m=+1116.418862905" Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.910642 4667 scope.go:117] "RemoveContainer" containerID="4d655e58221932b0880d695fdce67a072fff6ed212e41e7e646ca0f10cdcccc0" Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.933966 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gdsvg"] Jan 31 04:06:32 crc kubenswrapper[4667]: I0131 04:06:32.955847 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-gdsvg"] Jan 31 04:06:33 crc kubenswrapper[4667]: I0131 04:06:33.295996 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb505db2-7884-475e-9bed-884650bfaeb8" path="/var/lib/kubelet/pods/eb505db2-7884-475e-9bed-884650bfaeb8/volumes" Jan 31 04:06:34 crc kubenswrapper[4667]: I0131 04:06:34.902156 4667 generic.go:334] "Generic (PLEG): container finished" podID="959a81ea-7cf7-4fc4-b84d-14699d4e6bb4" containerID="f0fbfb66c2cd178083036c05278c819f4f045ef896b9882c36534e16f0433fc5" exitCode=0 Jan 31 04:06:34 crc kubenswrapper[4667]: I0131 04:06:34.902219 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9mmhm" event={"ID":"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4","Type":"ContainerDied","Data":"f0fbfb66c2cd178083036c05278c819f4f045ef896b9882c36534e16f0433fc5"} Jan 31 04:06:35 crc kubenswrapper[4667]: E0131 04:06:35.645107 4667 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78fd8a04_83bd_43d3_9a36_e116ecb3951a.slice/crio-49a2e52ab6872fb3a86d88661813b708e00bc2f970f66d3202030bec584e4a8d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78fd8a04_83bd_43d3_9a36_e116ecb3951a.slice/crio-conmon-49a2e52ab6872fb3a86d88661813b708e00bc2f970f66d3202030bec584e4a8d.scope\": RecentStats: unable to find data in memory cache]" Jan 31 04:06:35 crc kubenswrapper[4667]: I0131 04:06:35.914162 4667 generic.go:334] "Generic (PLEG): container finished" podID="78fd8a04-83bd-43d3-9a36-e116ecb3951a" containerID="49a2e52ab6872fb3a86d88661813b708e00bc2f970f66d3202030bec584e4a8d" exitCode=0 Jan 31 04:06:35 crc kubenswrapper[4667]: I0131 04:06:35.914290 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-txc7n" event={"ID":"78fd8a04-83bd-43d3-9a36-e116ecb3951a","Type":"ContainerDied","Data":"49a2e52ab6872fb3a86d88661813b708e00bc2f970f66d3202030bec584e4a8d"} Jan 31 04:06:36 crc kubenswrapper[4667]: I0131 04:06:36.333464 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9mmhm" Jan 31 04:06:36 crc kubenswrapper[4667]: I0131 04:06:36.411589 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-config-data\") pod \"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4\" (UID: \"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4\") " Jan 31 04:06:36 crc kubenswrapper[4667]: I0131 04:06:36.411695 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhcl4\" (UniqueName: \"kubernetes.io/projected/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-kube-api-access-mhcl4\") pod \"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4\" (UID: \"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4\") " Jan 31 04:06:36 crc kubenswrapper[4667]: I0131 04:06:36.411773 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-db-sync-config-data\") pod \"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4\" (UID: \"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4\") " Jan 31 04:06:36 crc kubenswrapper[4667]: I0131 04:06:36.411948 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-combined-ca-bundle\") pod \"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4\" (UID: \"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4\") " Jan 31 04:06:36 crc kubenswrapper[4667]: I0131 04:06:36.423792 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-kube-api-access-mhcl4" (OuterVolumeSpecName: "kube-api-access-mhcl4") pod "959a81ea-7cf7-4fc4-b84d-14699d4e6bb4" (UID: "959a81ea-7cf7-4fc4-b84d-14699d4e6bb4"). InnerVolumeSpecName "kube-api-access-mhcl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:36 crc kubenswrapper[4667]: I0131 04:06:36.424242 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "959a81ea-7cf7-4fc4-b84d-14699d4e6bb4" (UID: "959a81ea-7cf7-4fc4-b84d-14699d4e6bb4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:36 crc kubenswrapper[4667]: I0131 04:06:36.439818 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "959a81ea-7cf7-4fc4-b84d-14699d4e6bb4" (UID: "959a81ea-7cf7-4fc4-b84d-14699d4e6bb4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:36 crc kubenswrapper[4667]: I0131 04:06:36.472466 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-config-data" (OuterVolumeSpecName: "config-data") pod "959a81ea-7cf7-4fc4-b84d-14699d4e6bb4" (UID: "959a81ea-7cf7-4fc4-b84d-14699d4e6bb4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:36 crc kubenswrapper[4667]: I0131 04:06:36.514760 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:36 crc kubenswrapper[4667]: I0131 04:06:36.514823 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhcl4\" (UniqueName: \"kubernetes.io/projected/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-kube-api-access-mhcl4\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:36 crc kubenswrapper[4667]: I0131 04:06:36.514840 4667 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:36 crc kubenswrapper[4667]: I0131 04:06:36.514937 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:36 crc kubenswrapper[4667]: I0131 04:06:36.924350 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9mmhm" event={"ID":"959a81ea-7cf7-4fc4-b84d-14699d4e6bb4","Type":"ContainerDied","Data":"16279b83a5c455921feb7d9450a09dc88f20a0fd3db6f0a588f6a3e7cb8fd5bf"} Jan 31 04:06:36 crc kubenswrapper[4667]: I0131 04:06:36.924794 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16279b83a5c455921feb7d9450a09dc88f20a0fd3db6f0a588f6a3e7cb8fd5bf" Jan 31 04:06:36 crc kubenswrapper[4667]: I0131 04:06:36.924365 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9mmhm" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.193180 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-txc7n" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.330676 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78fd8a04-83bd-43d3-9a36-e116ecb3951a-config-data\") pod \"78fd8a04-83bd-43d3-9a36-e116ecb3951a\" (UID: \"78fd8a04-83bd-43d3-9a36-e116ecb3951a\") " Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.330788 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78fd8a04-83bd-43d3-9a36-e116ecb3951a-combined-ca-bundle\") pod \"78fd8a04-83bd-43d3-9a36-e116ecb3951a\" (UID: \"78fd8a04-83bd-43d3-9a36-e116ecb3951a\") " Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.330875 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs9gb\" (UniqueName: \"kubernetes.io/projected/78fd8a04-83bd-43d3-9a36-e116ecb3951a-kube-api-access-fs9gb\") pod \"78fd8a04-83bd-43d3-9a36-e116ecb3951a\" (UID: \"78fd8a04-83bd-43d3-9a36-e116ecb3951a\") " Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.354833 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78fd8a04-83bd-43d3-9a36-e116ecb3951a-kube-api-access-fs9gb" (OuterVolumeSpecName: "kube-api-access-fs9gb") pod "78fd8a04-83bd-43d3-9a36-e116ecb3951a" (UID: "78fd8a04-83bd-43d3-9a36-e116ecb3951a"). InnerVolumeSpecName "kube-api-access-fs9gb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.413222 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78fd8a04-83bd-43d3-9a36-e116ecb3951a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78fd8a04-83bd-43d3-9a36-e116ecb3951a" (UID: "78fd8a04-83bd-43d3-9a36-e116ecb3951a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.434108 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78fd8a04-83bd-43d3-9a36-e116ecb3951a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.434145 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs9gb\" (UniqueName: \"kubernetes.io/projected/78fd8a04-83bd-43d3-9a36-e116ecb3951a-kube-api-access-fs9gb\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.441198 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-kk7qf"] Jan 31 04:06:37 crc kubenswrapper[4667]: E0131 04:06:37.441675 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec06763-5d93-465b-ade2-557cc5072827" containerName="mariadb-database-create" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.441694 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec06763-5d93-465b-ade2-557cc5072827" containerName="mariadb-database-create" Jan 31 04:06:37 crc kubenswrapper[4667]: E0131 04:06:37.441727 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120c8a1f-7144-4b39-9040-7ffc70da2eb2" containerName="mariadb-database-create" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.441734 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="120c8a1f-7144-4b39-9040-7ffc70da2eb2" containerName="mariadb-database-create" Jan 31 04:06:37 crc kubenswrapper[4667]: E0131 04:06:37.441748 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="075dd640-8e38-4b34-b2fb-437599bbeb08" containerName="mariadb-account-create-update" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.441754 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="075dd640-8e38-4b34-b2fb-437599bbeb08" containerName="mariadb-account-create-update" Jan 31 04:06:37 crc kubenswrapper[4667]: E0131 04:06:37.441762 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36961045-4f23-401f-92a0-2fe30920bdf6" containerName="mariadb-account-create-update" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.441768 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="36961045-4f23-401f-92a0-2fe30920bdf6" containerName="mariadb-account-create-update" Jan 31 04:06:37 crc kubenswrapper[4667]: E0131 04:06:37.441778 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="007ac74b-f070-45f8-9cf9-1807ec2563f2" containerName="mariadb-database-create" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.441796 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="007ac74b-f070-45f8-9cf9-1807ec2563f2" containerName="mariadb-database-create" Jan 31 04:06:37 crc kubenswrapper[4667]: E0131 04:06:37.441806 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78fd8a04-83bd-43d3-9a36-e116ecb3951a" containerName="keystone-db-sync" Jan 31 04:06:37 crc 
kubenswrapper[4667]: I0131 04:06:37.441815 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="78fd8a04-83bd-43d3-9a36-e116ecb3951a" containerName="keystone-db-sync" Jan 31 04:06:37 crc kubenswrapper[4667]: E0131 04:06:37.441831 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb505db2-7884-475e-9bed-884650bfaeb8" containerName="init" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.441855 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb505db2-7884-475e-9bed-884650bfaeb8" containerName="init" Jan 31 04:06:37 crc kubenswrapper[4667]: E0131 04:06:37.441865 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2edbaebb-5022-48a4-82ab-2cb5b23fae97" containerName="mariadb-account-create-update" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.441871 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="2edbaebb-5022-48a4-82ab-2cb5b23fae97" containerName="mariadb-account-create-update" Jan 31 04:06:37 crc kubenswrapper[4667]: E0131 04:06:37.441880 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="959a81ea-7cf7-4fc4-b84d-14699d4e6bb4" containerName="glance-db-sync" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.441886 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="959a81ea-7cf7-4fc4-b84d-14699d4e6bb4" containerName="glance-db-sync" Jan 31 04:06:37 crc kubenswrapper[4667]: E0131 04:06:37.441897 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb505db2-7884-475e-9bed-884650bfaeb8" containerName="dnsmasq-dns" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.441903 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb505db2-7884-475e-9bed-884650bfaeb8" containerName="dnsmasq-dns" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.442111 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="959a81ea-7cf7-4fc4-b84d-14699d4e6bb4" containerName="glance-db-sync" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.442124 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb505db2-7884-475e-9bed-884650bfaeb8" containerName="dnsmasq-dns" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.442131 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="075dd640-8e38-4b34-b2fb-437599bbeb08" containerName="mariadb-account-create-update" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.442141 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="78fd8a04-83bd-43d3-9a36-e116ecb3951a" containerName="keystone-db-sync" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.442169 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ec06763-5d93-465b-ade2-557cc5072827" containerName="mariadb-database-create" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.442183 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="36961045-4f23-401f-92a0-2fe30920bdf6" containerName="mariadb-account-create-update" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.442190 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="007ac74b-f070-45f8-9cf9-1807ec2563f2" containerName="mariadb-database-create" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.442201 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="2edbaebb-5022-48a4-82ab-2cb5b23fae97" containerName="mariadb-account-create-update" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.442208 4667 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="120c8a1f-7144-4b39-9040-7ffc70da2eb2" containerName="mariadb-database-create" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.443768 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.451001 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78fd8a04-83bd-43d3-9a36-e116ecb3951a-config-data" (OuterVolumeSpecName: "config-data") pod "78fd8a04-83bd-43d3-9a36-e116ecb3951a" (UID: "78fd8a04-83bd-43d3-9a36-e116ecb3951a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.467661 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-kk7qf"] Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.536257 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-config\") pod \"dnsmasq-dns-74f6bcbc87-kk7qf\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.536314 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-kk7qf\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.536366 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-kk7qf\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.536435 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8r25\" (UniqueName: \"kubernetes.io/projected/70574dc6-d369-4606-b5f3-b5173c354d7a-kube-api-access-z8r25\") pod \"dnsmasq-dns-74f6bcbc87-kk7qf\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.536457 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-kk7qf\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.536476 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-kk7qf\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.536535 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78fd8a04-83bd-43d3-9a36-e116ecb3951a-config-data\") on node \"crc\" 
DevicePath \"\"" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.638760 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8r25\" (UniqueName: \"kubernetes.io/projected/70574dc6-d369-4606-b5f3-b5173c354d7a-kube-api-access-z8r25\") pod \"dnsmasq-dns-74f6bcbc87-kk7qf\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.638811 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-kk7qf\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.638839 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-kk7qf\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.638914 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-config\") pod \"dnsmasq-dns-74f6bcbc87-kk7qf\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.638956 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-kk7qf\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.639007 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-kk7qf\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.639982 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-kk7qf\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.640069 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-kk7qf\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.640149 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-config\") pod \"dnsmasq-dns-74f6bcbc87-kk7qf\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.640567 4667 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-kk7qf\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.641093 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-kk7qf\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.659776 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8r25\" (UniqueName: \"kubernetes.io/projected/70574dc6-d369-4606-b5f3-b5173c354d7a-kube-api-access-z8r25\") pod \"dnsmasq-dns-74f6bcbc87-kk7qf\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") " pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:37 crc kubenswrapper[4667]: I0131 04:06:37.817049 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.011007 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-txc7n" event={"ID":"78fd8a04-83bd-43d3-9a36-e116ecb3951a","Type":"ContainerDied","Data":"93fe4eba14fa1d99bd74728aaacb76bba29f925ec2f0633726ec65b20c9445e8"} Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.011058 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93fe4eba14fa1d99bd74728aaacb76bba29f925ec2f0633726ec65b20c9445e8" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.011158 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-txc7n" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.230810 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-kk7qf"] Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.262657 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-hqk9s"] Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.264011 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.283908 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-hqk9s"] Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.329670 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-s2th9"] Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.337915 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-s2th9" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.351427 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.351679 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.351427 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.353269 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rw7d7" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.353434 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.389801 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-hqk9s\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") " pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.389882 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-hqk9s\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") " pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.389932 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-dns-svc\") pod \"dnsmasq-dns-847c4cc679-hqk9s\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") " pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.390022 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-config\") pod \"dnsmasq-dns-847c4cc679-hqk9s\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") " pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.390095 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k4kx\" (UniqueName: \"kubernetes.io/projected/db92e124-da3b-48d4-af73-868133cb57ba-kube-api-access-5k4kx\") pod \"dnsmasq-dns-847c4cc679-hqk9s\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") " pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.390134 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-hqk9s\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") " pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.424134 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s2th9"] Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.464166 4667 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-kk7qf"] Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.492490 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-hqk9s\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") " pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.492547 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-hqk9s\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") " pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.492588 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-dns-svc\") pod \"dnsmasq-dns-847c4cc679-hqk9s\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") " pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.492634 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-fernet-keys\") pod \"keystone-bootstrap-s2th9\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " pod="openstack/keystone-bootstrap-s2th9" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.492681 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-credential-keys\") pod \"keystone-bootstrap-s2th9\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " pod="openstack/keystone-bootstrap-s2th9" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.492711 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-combined-ca-bundle\") pod \"keystone-bootstrap-s2th9\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " pod="openstack/keystone-bootstrap-s2th9" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.492746 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-config\") pod \"dnsmasq-dns-847c4cc679-hqk9s\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") " pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.492779 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lb6c\" (UniqueName: \"kubernetes.io/projected/6f4b88f0-e927-43e5-941e-1c431fb7269c-kube-api-access-9lb6c\") pod \"keystone-bootstrap-s2th9\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " pod="openstack/keystone-bootstrap-s2th9" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.492822 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k4kx\" (UniqueName: \"kubernetes.io/projected/db92e124-da3b-48d4-af73-868133cb57ba-kube-api-access-5k4kx\") pod \"dnsmasq-dns-847c4cc679-hqk9s\" (UID: 
\"db92e124-da3b-48d4-af73-868133cb57ba\") " pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.492881 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-hqk9s\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") " pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.492910 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-config-data\") pod \"keystone-bootstrap-s2th9\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " pod="openstack/keystone-bootstrap-s2th9" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.492934 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-scripts\") pod \"keystone-bootstrap-s2th9\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " pod="openstack/keystone-bootstrap-s2th9" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.494081 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-hqk9s\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") " pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.494666 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-hqk9s\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") " pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.502832 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-dns-svc\") pod \"dnsmasq-dns-847c4cc679-hqk9s\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") " pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.502879 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-hqk9s\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") " pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.503032 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-config\") pod \"dnsmasq-dns-847c4cc679-hqk9s\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") " pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.598165 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k4kx\" (UniqueName: \"kubernetes.io/projected/db92e124-da3b-48d4-af73-868133cb57ba-kube-api-access-5k4kx\") pod \"dnsmasq-dns-847c4cc679-hqk9s\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") " pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" Jan 31 04:06:38 crc 
kubenswrapper[4667]: I0131 04:06:38.610446 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.611531 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-fernet-keys\") pod \"keystone-bootstrap-s2th9\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " pod="openstack/keystone-bootstrap-s2th9" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.611597 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-credential-keys\") pod \"keystone-bootstrap-s2th9\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " pod="openstack/keystone-bootstrap-s2th9" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.611622 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-combined-ca-bundle\") pod \"keystone-bootstrap-s2th9\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " pod="openstack/keystone-bootstrap-s2th9" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.611654 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lb6c\" (UniqueName: \"kubernetes.io/projected/6f4b88f0-e927-43e5-941e-1c431fb7269c-kube-api-access-9lb6c\") pod \"keystone-bootstrap-s2th9\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " pod="openstack/keystone-bootstrap-s2th9" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.611700 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-config-data\") pod \"keystone-bootstrap-s2th9\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " pod="openstack/keystone-bootstrap-s2th9" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.611717 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-scripts\") pod \"keystone-bootstrap-s2th9\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " pod="openstack/keystone-bootstrap-s2th9" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.615371 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-combined-ca-bundle\") pod \"keystone-bootstrap-s2th9\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " pod="openstack/keystone-bootstrap-s2th9" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.624436 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-fernet-keys\") pod \"keystone-bootstrap-s2th9\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " pod="openstack/keystone-bootstrap-s2th9" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.632984 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-config-data\") pod \"keystone-bootstrap-s2th9\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " pod="openstack/keystone-bootstrap-s2th9" Jan 31 04:06:38 
crc kubenswrapper[4667]: I0131 04:06:38.633630 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-scripts\") pod \"keystone-bootstrap-s2th9\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " pod="openstack/keystone-bootstrap-s2th9"
Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.635334 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-credential-keys\") pod \"keystone-bootstrap-s2th9\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " pod="openstack/keystone-bootstrap-s2th9"
Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.733621 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lb6c\" (UniqueName: \"kubernetes.io/projected/6f4b88f0-e927-43e5-941e-1c431fb7269c-kube-api-access-9lb6c\") pod \"keystone-bootstrap-s2th9\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " pod="openstack/keystone-bootstrap-s2th9"
Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.877674 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.881038 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.950732 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.957285 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
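
With kube-api-access-9lb6c mounted, every volume of openstack/keystone-bootstrap-s2th9 has now walked the three-step progression that repeats throughout this log: "VerifyControllerAttachedVolume started" registers the volume with the reconciler, "operationExecutor.MountVolume started" kicks off the mount, and "MountVolume.SetUp succeeded" marks it usable; sandbox creation waits on all of a pod's volumes reaching the last step. A simplified sketch of that state machine (illustrative only, not kubelet source):

    package main

    import "fmt"

    type phase int

    const (
    	attached phase = iota // "VerifyControllerAttachedVolume started"
    	mounting              // "operationExecutor.MountVolume started"
    	mounted               // "MountVolume.SetUp succeeded"
    )

    func main() {
    	// Volume names taken from the keystone-bootstrap-s2th9 records above.
    	volumes := map[string]phase{
    		"fernet-keys": attached, "credential-keys": attached, "combined-ca-bundle": attached,
    		"config-data": attached, "scripts": attached, "kube-api-access-9lb6c": attached,
    	}
    	for name := range volumes {
    		volumes[name] = mounting // operation executor picks up the volume
    		volumes[name] = mounted  // SetUp succeeded
    		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", name)
    	}
    	fmt.Println("all volumes mounted; pod can start its sandbox")
    }
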
Need to start a new one" pod="openstack/keystone-bootstrap-s2th9" Jan 31 04:06:38 crc kubenswrapper[4667]: I0131 04:06:38.959473 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.025229 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c175848a-4645-42e7-8ccc-ab873e1ff7aa-log-httpd\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.025291 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bht9f\" (UniqueName: \"kubernetes.io/projected/c175848a-4645-42e7-8ccc-ab873e1ff7aa-kube-api-access-bht9f\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.025362 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.025389 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-scripts\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.025470 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c175848a-4645-42e7-8ccc-ab873e1ff7aa-run-httpd\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.025499 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-config-data\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.025526 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.057810 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6b97688f77-kkzs5"] Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.062364 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b97688f77-kkzs5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.067202 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" event={"ID":"70574dc6-d369-4606-b5f3-b5173c354d7a","Type":"ContainerStarted","Data":"69f2b79d3936fdb1ac8627bdeed780d0ed8a92fcbd350d271e93b257dcb0ee6e"} Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.101464 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.101740 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.110131 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-ctjps" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.110439 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.133898 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c175848a-4645-42e7-8ccc-ab873e1ff7aa-log-httpd\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.134336 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bht9f\" (UniqueName: \"kubernetes.io/projected/c175848a-4645-42e7-8ccc-ab873e1ff7aa-kube-api-access-bht9f\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.134368 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15d05b5e-da40-49f4-8556-f6a192a9f776-scripts\") pod \"horizon-6b97688f77-kkzs5\" (UID: \"15d05b5e-da40-49f4-8556-f6a192a9f776\") " pod="openstack/horizon-6b97688f77-kkzs5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.134397 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/15d05b5e-da40-49f4-8556-f6a192a9f776-horizon-secret-key\") pod \"horizon-6b97688f77-kkzs5\" (UID: \"15d05b5e-da40-49f4-8556-f6a192a9f776\") " pod="openstack/horizon-6b97688f77-kkzs5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.134434 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.134454 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-scripts\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.134502 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjsl6\" (UniqueName: \"kubernetes.io/projected/15d05b5e-da40-49f4-8556-f6a192a9f776-kube-api-access-tjsl6\") pod 
\"horizon-6b97688f77-kkzs5\" (UID: \"15d05b5e-da40-49f4-8556-f6a192a9f776\") " pod="openstack/horizon-6b97688f77-kkzs5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.134531 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c175848a-4645-42e7-8ccc-ab873e1ff7aa-run-httpd\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.134550 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15d05b5e-da40-49f4-8556-f6a192a9f776-logs\") pod \"horizon-6b97688f77-kkzs5\" (UID: \"15d05b5e-da40-49f4-8556-f6a192a9f776\") " pod="openstack/horizon-6b97688f77-kkzs5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.134574 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-config-data\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.135065 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c175848a-4645-42e7-8ccc-ab873e1ff7aa-log-httpd\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.137017 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.137059 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/15d05b5e-da40-49f4-8556-f6a192a9f776-config-data\") pod \"horizon-6b97688f77-kkzs5\" (UID: \"15d05b5e-da40-49f4-8556-f6a192a9f776\") " pod="openstack/horizon-6b97688f77-kkzs5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.137359 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c175848a-4645-42e7-8ccc-ab873e1ff7aa-run-httpd\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.146297 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.146979 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.149205 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-scripts\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.153267 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-245c9"] Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.155092 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-245c9" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.169317 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-config-data\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.199471 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b97688f77-kkzs5"] Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.224966 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.225197 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9b928" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.225205 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.243208 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc9db8ae-2f60-4efd-9a11-4aac5f336900-etc-machine-id\") pod \"cinder-db-sync-245c9\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " pod="openstack/cinder-db-sync-245c9" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.243343 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15d05b5e-da40-49f4-8556-f6a192a9f776-logs\") pod \"horizon-6b97688f77-kkzs5\" (UID: \"15d05b5e-da40-49f4-8556-f6a192a9f776\") " pod="openstack/horizon-6b97688f77-kkzs5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.243417 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gtrc\" (UniqueName: \"kubernetes.io/projected/bc9db8ae-2f60-4efd-9a11-4aac5f336900-kube-api-access-6gtrc\") pod \"cinder-db-sync-245c9\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " pod="openstack/cinder-db-sync-245c9" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.243456 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/15d05b5e-da40-49f4-8556-f6a192a9f776-config-data\") pod \"horizon-6b97688f77-kkzs5\" (UID: \"15d05b5e-da40-49f4-8556-f6a192a9f776\") " pod="openstack/horizon-6b97688f77-kkzs5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.246205 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-config-data\") pod \"cinder-db-sync-245c9\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " pod="openstack/cinder-db-sync-245c9" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.246313 4667 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-combined-ca-bundle\") pod \"cinder-db-sync-245c9\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " pod="openstack/cinder-db-sync-245c9" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.246338 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15d05b5e-da40-49f4-8556-f6a192a9f776-scripts\") pod \"horizon-6b97688f77-kkzs5\" (UID: \"15d05b5e-da40-49f4-8556-f6a192a9f776\") " pod="openstack/horizon-6b97688f77-kkzs5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.246383 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/15d05b5e-da40-49f4-8556-f6a192a9f776-horizon-secret-key\") pod \"horizon-6b97688f77-kkzs5\" (UID: \"15d05b5e-da40-49f4-8556-f6a192a9f776\") " pod="openstack/horizon-6b97688f77-kkzs5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.246578 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-scripts\") pod \"cinder-db-sync-245c9\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " pod="openstack/cinder-db-sync-245c9" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.246667 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjsl6\" (UniqueName: \"kubernetes.io/projected/15d05b5e-da40-49f4-8556-f6a192a9f776-kube-api-access-tjsl6\") pod \"horizon-6b97688f77-kkzs5\" (UID: \"15d05b5e-da40-49f4-8556-f6a192a9f776\") " pod="openstack/horizon-6b97688f77-kkzs5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.246690 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-db-sync-config-data\") pod \"cinder-db-sync-245c9\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " pod="openstack/cinder-db-sync-245c9" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.247123 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15d05b5e-da40-49f4-8556-f6a192a9f776-logs\") pod \"horizon-6b97688f77-kkzs5\" (UID: \"15d05b5e-da40-49f4-8556-f6a192a9f776\") " pod="openstack/horizon-6b97688f77-kkzs5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.248228 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/15d05b5e-da40-49f4-8556-f6a192a9f776-config-data\") pod \"horizon-6b97688f77-kkzs5\" (UID: \"15d05b5e-da40-49f4-8556-f6a192a9f776\") " pod="openstack/horizon-6b97688f77-kkzs5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.250816 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-245c9"] Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.263246 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bht9f\" (UniqueName: \"kubernetes.io/projected/c175848a-4645-42e7-8ccc-ab873e1ff7aa-kube-api-access-bht9f\") pod \"ceilometer-0\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.311453 4667 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15d05b5e-da40-49f4-8556-f6a192a9f776-scripts\") pod \"horizon-6b97688f77-kkzs5\" (UID: \"15d05b5e-da40-49f4-8556-f6a192a9f776\") " pod="openstack/horizon-6b97688f77-kkzs5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.380587 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-db-sync-config-data\") pod \"cinder-db-sync-245c9\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " pod="openstack/cinder-db-sync-245c9" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.381051 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc9db8ae-2f60-4efd-9a11-4aac5f336900-etc-machine-id\") pod \"cinder-db-sync-245c9\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " pod="openstack/cinder-db-sync-245c9" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.381149 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gtrc\" (UniqueName: \"kubernetes.io/projected/bc9db8ae-2f60-4efd-9a11-4aac5f336900-kube-api-access-6gtrc\") pod \"cinder-db-sync-245c9\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " pod="openstack/cinder-db-sync-245c9" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.381207 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-config-data\") pod \"cinder-db-sync-245c9\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " pod="openstack/cinder-db-sync-245c9" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.381300 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-combined-ca-bundle\") pod \"cinder-db-sync-245c9\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " pod="openstack/cinder-db-sync-245c9" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.382948 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc9db8ae-2f60-4efd-9a11-4aac5f336900-etc-machine-id\") pod \"cinder-db-sync-245c9\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " pod="openstack/cinder-db-sync-245c9" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.394142 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-combined-ca-bundle\") pod \"cinder-db-sync-245c9\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " pod="openstack/cinder-db-sync-245c9" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.401529 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-db-sync-config-data\") pod \"cinder-db-sync-245c9\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " pod="openstack/cinder-db-sync-245c9" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.415263 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-config-data\") pod \"cinder-db-sync-245c9\" (UID: 
\"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " pod="openstack/cinder-db-sync-245c9" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.468731 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/15d05b5e-da40-49f4-8556-f6a192a9f776-horizon-secret-key\") pod \"horizon-6b97688f77-kkzs5\" (UID: \"15d05b5e-da40-49f4-8556-f6a192a9f776\") " pod="openstack/horizon-6b97688f77-kkzs5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.494953 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-scripts\") pod \"cinder-db-sync-245c9\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " pod="openstack/cinder-db-sync-245c9" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.508486 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-vvkc5"] Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.510571 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.522375 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-scripts\") pod \"cinder-db-sync-245c9\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " pod="openstack/cinder-db-sync-245c9" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.529492 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjsl6\" (UniqueName: \"kubernetes.io/projected/15d05b5e-da40-49f4-8556-f6a192a9f776-kube-api-access-tjsl6\") pod \"horizon-6b97688f77-kkzs5\" (UID: \"15d05b5e-da40-49f4-8556-f6a192a9f776\") " pod="openstack/horizon-6b97688f77-kkzs5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.531415 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vvkc5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.574311 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-hlkh6" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.574674 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.579142 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.595944 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gtrc\" (UniqueName: \"kubernetes.io/projected/bc9db8ae-2f60-4efd-9a11-4aac5f336900-kube-api-access-6gtrc\") pod \"cinder-db-sync-245c9\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " pod="openstack/cinder-db-sync-245c9" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.603691 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b58d4b49-fb58-480e-9a43-2675ce1fc0c1-combined-ca-bundle\") pod \"neutron-db-sync-vvkc5\" (UID: \"b58d4b49-fb58-480e-9a43-2675ce1fc0c1\") " pod="openstack/neutron-db-sync-vvkc5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.603885 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2btf5\" (UniqueName: \"kubernetes.io/projected/b58d4b49-fb58-480e-9a43-2675ce1fc0c1-kube-api-access-2btf5\") pod \"neutron-db-sync-vvkc5\" (UID: \"b58d4b49-fb58-480e-9a43-2675ce1fc0c1\") " pod="openstack/neutron-db-sync-vvkc5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.603997 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b58d4b49-fb58-480e-9a43-2675ce1fc0c1-config\") pod \"neutron-db-sync-vvkc5\" (UID: \"b58d4b49-fb58-480e-9a43-2675ce1fc0c1\") " pod="openstack/neutron-db-sync-vvkc5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.614501 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vvkc5"] Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.693402 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b97688f77-kkzs5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.706047 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b58d4b49-fb58-480e-9a43-2675ce1fc0c1-config\") pod \"neutron-db-sync-vvkc5\" (UID: \"b58d4b49-fb58-480e-9a43-2675ce1fc0c1\") " pod="openstack/neutron-db-sync-vvkc5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.706101 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b58d4b49-fb58-480e-9a43-2675ce1fc0c1-combined-ca-bundle\") pod \"neutron-db-sync-vvkc5\" (UID: \"b58d4b49-fb58-480e-9a43-2675ce1fc0c1\") " pod="openstack/neutron-db-sync-vvkc5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.706196 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2btf5\" (UniqueName: \"kubernetes.io/projected/b58d4b49-fb58-480e-9a43-2675ce1fc0c1-kube-api-access-2btf5\") pod \"neutron-db-sync-vvkc5\" (UID: \"b58d4b49-fb58-480e-9a43-2675ce1fc0c1\") " pod="openstack/neutron-db-sync-vvkc5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.723599 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b58d4b49-fb58-480e-9a43-2675ce1fc0c1-combined-ca-bundle\") pod \"neutron-db-sync-vvkc5\" (UID: \"b58d4b49-fb58-480e-9a43-2675ce1fc0c1\") " pod="openstack/neutron-db-sync-vvkc5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.730550 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b58d4b49-fb58-480e-9a43-2675ce1fc0c1-config\") pod \"neutron-db-sync-vvkc5\" (UID: \"b58d4b49-fb58-480e-9a43-2675ce1fc0c1\") " pod="openstack/neutron-db-sync-vvkc5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.752762 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2btf5\" (UniqueName: \"kubernetes.io/projected/b58d4b49-fb58-480e-9a43-2675ce1fc0c1-kube-api-access-2btf5\") pod \"neutron-db-sync-vvkc5\" (UID: \"b58d4b49-fb58-480e-9a43-2675ce1fc0c1\") " pod="openstack/neutron-db-sync-vvkc5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.768979 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-4nj2p"] Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.776664 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4nj2p" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.786364 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-wnb99" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.786598 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.802307 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4nj2p"] Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.811652 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6bac61-1103-438b-9e75-f3d6b6902270-combined-ca-bundle\") pod \"barbican-db-sync-4nj2p\" (UID: \"7b6bac61-1103-438b-9e75-f3d6b6902270\") " pod="openstack/barbican-db-sync-4nj2p" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.811704 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6t7c\" (UniqueName: \"kubernetes.io/projected/7b6bac61-1103-438b-9e75-f3d6b6902270-kube-api-access-x6t7c\") pod \"barbican-db-sync-4nj2p\" (UID: \"7b6bac61-1103-438b-9e75-f3d6b6902270\") " pod="openstack/barbican-db-sync-4nj2p" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.811892 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b6bac61-1103-438b-9e75-f3d6b6902270-db-sync-config-data\") pod \"barbican-db-sync-4nj2p\" (UID: \"7b6bac61-1103-438b-9e75-f3d6b6902270\") " pod="openstack/barbican-db-sync-4nj2p" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.855006 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-mkdm4"] Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.856308 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mkdm4" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.858502 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-245c9" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.880043 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vttwk" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.890383 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.891914 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.908064 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.908708 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8pgdz" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.920566 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.921003 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.936371 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b6bac61-1103-438b-9e75-f3d6b6902270-db-sync-config-data\") pod \"barbican-db-sync-4nj2p\" (UID: \"7b6bac61-1103-438b-9e75-f3d6b6902270\") " pod="openstack/barbican-db-sync-4nj2p" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.936417 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6bac61-1103-438b-9e75-f3d6b6902270-combined-ca-bundle\") pod \"barbican-db-sync-4nj2p\" (UID: \"7b6bac61-1103-438b-9e75-f3d6b6902270\") " pod="openstack/barbican-db-sync-4nj2p" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.936439 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6t7c\" (UniqueName: \"kubernetes.io/projected/7b6bac61-1103-438b-9e75-f3d6b6902270-kube-api-access-x6t7c\") pod \"barbican-db-sync-4nj2p\" (UID: \"7b6bac61-1103-438b-9e75-f3d6b6902270\") " pod="openstack/barbican-db-sync-4nj2p" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.937055 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.958085 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vvkc5" Jan 31 04:06:39 crc kubenswrapper[4667]: I0131 04:06:39.991519 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6bac61-1103-438b-9e75-f3d6b6902270-combined-ca-bundle\") pod \"barbican-db-sync-4nj2p\" (UID: \"7b6bac61-1103-438b-9e75-f3d6b6902270\") " pod="openstack/barbican-db-sync-4nj2p" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:39.997701 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b6bac61-1103-438b-9e75-f3d6b6902270-db-sync-config-data\") pod \"barbican-db-sync-4nj2p\" (UID: \"7b6bac61-1103-438b-9e75-f3d6b6902270\") " pod="openstack/barbican-db-sync-4nj2p" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.028706 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6t7c\" (UniqueName: \"kubernetes.io/projected/7b6bac61-1103-438b-9e75-f3d6b6902270-kube-api-access-x6t7c\") pod \"barbican-db-sync-4nj2p\" (UID: \"7b6bac61-1103-438b-9e75-f3d6b6902270\") " pod="openstack/barbican-db-sync-4nj2p" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.038580 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c3512ca-ae09-4231-920d-7b0694041097-scripts\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.038631 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c3512ca-ae09-4231-920d-7b0694041097-config-data\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.038669 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c3512ca-ae09-4231-920d-7b0694041097-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.038720 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c3512ca-ae09-4231-920d-7b0694041097-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.038751 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw9fx\" (UniqueName: \"kubernetes.io/projected/1c3512ca-ae09-4231-920d-7b0694041097-kube-api-access-nw9fx\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.038769 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv7r6\" (UniqueName: \"kubernetes.io/projected/6e23be1c-6ab2-442e-b12e-e4083c274a67-kube-api-access-xv7r6\") pod \"placement-db-sync-mkdm4\" 
(UID: \"6e23be1c-6ab2-442e-b12e-e4083c274a67\") " pod="openstack/placement-db-sync-mkdm4" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.038805 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.038836 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e23be1c-6ab2-442e-b12e-e4083c274a67-logs\") pod \"placement-db-sync-mkdm4\" (UID: \"6e23be1c-6ab2-442e-b12e-e4083c274a67\") " pod="openstack/placement-db-sync-mkdm4" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.038874 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c3512ca-ae09-4231-920d-7b0694041097-logs\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.038897 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e23be1c-6ab2-442e-b12e-e4083c274a67-config-data\") pod \"placement-db-sync-mkdm4\" (UID: \"6e23be1c-6ab2-442e-b12e-e4083c274a67\") " pod="openstack/placement-db-sync-mkdm4" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.038918 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e23be1c-6ab2-442e-b12e-e4083c274a67-combined-ca-bundle\") pod \"placement-db-sync-mkdm4\" (UID: \"6e23be1c-6ab2-442e-b12e-e4083c274a67\") " pod="openstack/placement-db-sync-mkdm4" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.038938 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e23be1c-6ab2-442e-b12e-e4083c274a67-scripts\") pod \"placement-db-sync-mkdm4\" (UID: \"6e23be1c-6ab2-442e-b12e-e4083c274a67\") " pod="openstack/placement-db-sync-mkdm4" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.049011 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mkdm4"] Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.109316 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.127181 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4nj2p" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.144662 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c3512ca-ae09-4231-920d-7b0694041097-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.144840 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c3512ca-ae09-4231-920d-7b0694041097-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.144944 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw9fx\" (UniqueName: \"kubernetes.io/projected/1c3512ca-ae09-4231-920d-7b0694041097-kube-api-access-nw9fx\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.145025 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv7r6\" (UniqueName: \"kubernetes.io/projected/6e23be1c-6ab2-442e-b12e-e4083c274a67-kube-api-access-xv7r6\") pod \"placement-db-sync-mkdm4\" (UID: \"6e23be1c-6ab2-442e-b12e-e4083c274a67\") " pod="openstack/placement-db-sync-mkdm4" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.145098 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.145173 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e23be1c-6ab2-442e-b12e-e4083c274a67-logs\") pod \"placement-db-sync-mkdm4\" (UID: \"6e23be1c-6ab2-442e-b12e-e4083c274a67\") " pod="openstack/placement-db-sync-mkdm4" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.145244 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c3512ca-ae09-4231-920d-7b0694041097-logs\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.145318 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e23be1c-6ab2-442e-b12e-e4083c274a67-config-data\") pod \"placement-db-sync-mkdm4\" (UID: \"6e23be1c-6ab2-442e-b12e-e4083c274a67\") " pod="openstack/placement-db-sync-mkdm4" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.145385 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e23be1c-6ab2-442e-b12e-e4083c274a67-combined-ca-bundle\") pod \"placement-db-sync-mkdm4\" (UID: \"6e23be1c-6ab2-442e-b12e-e4083c274a67\") " pod="openstack/placement-db-sync-mkdm4" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.145451 
4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e23be1c-6ab2-442e-b12e-e4083c274a67-scripts\") pod \"placement-db-sync-mkdm4\" (UID: \"6e23be1c-6ab2-442e-b12e-e4083c274a67\") " pod="openstack/placement-db-sync-mkdm4" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.145524 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c3512ca-ae09-4231-920d-7b0694041097-scripts\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.145589 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c3512ca-ae09-4231-920d-7b0694041097-config-data\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.147467 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c3512ca-ae09-4231-920d-7b0694041097-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.149151 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c3512ca-ae09-4231-920d-7b0694041097-logs\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.156572 4667 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.159769 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e23be1c-6ab2-442e-b12e-e4083c274a67-config-data\") pod \"placement-db-sync-mkdm4\" (UID: \"6e23be1c-6ab2-442e-b12e-e4083c274a67\") " pod="openstack/placement-db-sync-mkdm4" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.160425 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e23be1c-6ab2-442e-b12e-e4083c274a67-scripts\") pod \"placement-db-sync-mkdm4\" (UID: \"6e23be1c-6ab2-442e-b12e-e4083c274a67\") " pod="openstack/placement-db-sync-mkdm4" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.162403 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e23be1c-6ab2-442e-b12e-e4083c274a67-logs\") pod \"placement-db-sync-mkdm4\" (UID: \"6e23be1c-6ab2-442e-b12e-e4083c274a67\") " pod="openstack/placement-db-sync-mkdm4" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.170904 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c3512ca-ae09-4231-920d-7b0694041097-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.176349 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c3512ca-ae09-4231-920d-7b0694041097-config-data\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.181694 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c3512ca-ae09-4231-920d-7b0694041097-scripts\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.181950 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e23be1c-6ab2-442e-b12e-e4083c274a67-combined-ca-bundle\") pod \"placement-db-sync-mkdm4\" (UID: \"6e23be1c-6ab2-442e-b12e-e4083c274a67\") " pod="openstack/placement-db-sync-mkdm4" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.183809 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw9fx\" (UniqueName: \"kubernetes.io/projected/1c3512ca-ae09-4231-920d-7b0694041097-kube-api-access-nw9fx\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.183907 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" event={"ID":"70574dc6-d369-4606-b5f3-b5173c354d7a","Type":"ContainerStarted","Data":"fff18d44740ab99af923a4dd78bc1d7ce61c5556feed21280bba43454fd45416"} Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.204442 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv7r6\" (UniqueName: \"kubernetes.io/projected/6e23be1c-6ab2-442e-b12e-e4083c274a67-kube-api-access-xv7r6\") pod \"placement-db-sync-mkdm4\" (UID: \"6e23be1c-6ab2-442e-b12e-e4083c274a67\") " pod="openstack/placement-db-sync-mkdm4" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.207058 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-559945cfdf-qwr2k"] Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.208746 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-559945cfdf-qwr2k" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.210416 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-mkdm4" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.227860 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-s2th9"] Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.249466 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1636f31-13c4-4745-88a2-20ff71e46358-horizon-secret-key\") pod \"horizon-559945cfdf-qwr2k\" (UID: \"f1636f31-13c4-4745-88a2-20ff71e46358\") " pod="openstack/horizon-559945cfdf-qwr2k" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.249514 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1636f31-13c4-4745-88a2-20ff71e46358-logs\") pod \"horizon-559945cfdf-qwr2k\" (UID: \"f1636f31-13c4-4745-88a2-20ff71e46358\") " pod="openstack/horizon-559945cfdf-qwr2k" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.249546 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wm8j\" (UniqueName: \"kubernetes.io/projected/f1636f31-13c4-4745-88a2-20ff71e46358-kube-api-access-8wm8j\") pod \"horizon-559945cfdf-qwr2k\" (UID: \"f1636f31-13c4-4745-88a2-20ff71e46358\") " pod="openstack/horizon-559945cfdf-qwr2k" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.249587 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1636f31-13c4-4745-88a2-20ff71e46358-config-data\") pod \"horizon-559945cfdf-qwr2k\" (UID: \"f1636f31-13c4-4745-88a2-20ff71e46358\") " pod="openstack/horizon-559945cfdf-qwr2k" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.249671 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1636f31-13c4-4745-88a2-20ff71e46358-scripts\") pod \"horizon-559945cfdf-qwr2k\" (UID: \"f1636f31-13c4-4745-88a2-20ff71e46358\") " pod="openstack/horizon-559945cfdf-qwr2k" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.256010 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-559945cfdf-qwr2k"] Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.279588 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-hqk9s"] Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.320073 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gzgp6"] Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.340706 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.352834 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1636f31-13c4-4745-88a2-20ff71e46358-scripts\") pod \"horizon-559945cfdf-qwr2k\" (UID: \"f1636f31-13c4-4745-88a2-20ff71e46358\") " pod="openstack/horizon-559945cfdf-qwr2k" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.352998 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1636f31-13c4-4745-88a2-20ff71e46358-horizon-secret-key\") pod \"horizon-559945cfdf-qwr2k\" (UID: \"f1636f31-13c4-4745-88a2-20ff71e46358\") " pod="openstack/horizon-559945cfdf-qwr2k" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.353032 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1636f31-13c4-4745-88a2-20ff71e46358-logs\") pod \"horizon-559945cfdf-qwr2k\" (UID: \"f1636f31-13c4-4745-88a2-20ff71e46358\") " pod="openstack/horizon-559945cfdf-qwr2k" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.353051 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wm8j\" (UniqueName: \"kubernetes.io/projected/f1636f31-13c4-4745-88a2-20ff71e46358-kube-api-access-8wm8j\") pod \"horizon-559945cfdf-qwr2k\" (UID: \"f1636f31-13c4-4745-88a2-20ff71e46358\") " pod="openstack/horizon-559945cfdf-qwr2k" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.353107 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1636f31-13c4-4745-88a2-20ff71e46358-config-data\") pod \"horizon-559945cfdf-qwr2k\" (UID: \"f1636f31-13c4-4745-88a2-20ff71e46358\") " pod="openstack/horizon-559945cfdf-qwr2k" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.354784 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1636f31-13c4-4745-88a2-20ff71e46358-scripts\") pod \"horizon-559945cfdf-qwr2k\" (UID: \"f1636f31-13c4-4745-88a2-20ff71e46358\") " pod="openstack/horizon-559945cfdf-qwr2k" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.359566 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.371441 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1636f31-13c4-4745-88a2-20ff71e46358-logs\") pod \"horizon-559945cfdf-qwr2k\" (UID: \"f1636f31-13c4-4745-88a2-20ff71e46358\") " pod="openstack/horizon-559945cfdf-qwr2k" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.372154 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1636f31-13c4-4745-88a2-20ff71e46358-config-data\") pod \"horizon-559945cfdf-qwr2k\" (UID: \"f1636f31-13c4-4745-88a2-20ff71e46358\") " pod="openstack/horizon-559945cfdf-qwr2k" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.372227 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-785d8bcb8c-gzgp6"] Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.383402 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1636f31-13c4-4745-88a2-20ff71e46358-horizon-secret-key\") pod \"horizon-559945cfdf-qwr2k\" (UID: \"f1636f31-13c4-4745-88a2-20ff71e46358\") " pod="openstack/horizon-559945cfdf-qwr2k" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.397089 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-hqk9s"] Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.431223 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.434192 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.436719 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.456291 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-gzgp6\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.456718 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6ps4\" (UniqueName: \"kubernetes.io/projected/a0e678e9-3d71-4c27-a179-f4fdd1515701-kube-api-access-x6ps4\") pod \"dnsmasq-dns-785d8bcb8c-gzgp6\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.456744 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-config\") pod \"dnsmasq-dns-785d8bcb8c-gzgp6\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.470802 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-gzgp6\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.470905 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-gzgp6\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.470942 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-gzgp6\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.494294 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wm8j\" (UniqueName: \"kubernetes.io/projected/f1636f31-13c4-4745-88a2-20ff71e46358-kube-api-access-8wm8j\") pod \"horizon-559945cfdf-qwr2k\" (UID: \"f1636f31-13c4-4745-88a2-20ff71e46358\") " pod="openstack/horizon-559945cfdf-qwr2k" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.497164 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.550368 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-559945cfdf-qwr2k" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.556454 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.572673 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-config\") pod \"dnsmasq-dns-785d8bcb8c-gzgp6\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.572861 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-797fb\" (UniqueName: \"kubernetes.io/projected/5ed264cf-9f83-47d5-8291-95ee4838616f-kube-api-access-797fb\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.572977 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ed264cf-9f83-47d5-8291-95ee4838616f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.573047 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ed264cf-9f83-47d5-8291-95ee4838616f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.573123 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ed264cf-9f83-47d5-8291-95ee4838616f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.573225 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ed264cf-9f83-47d5-8291-95ee4838616f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.573299 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-gzgp6\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.573381 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-gzgp6\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.573494 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-gzgp6\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.573601 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ed264cf-9f83-47d5-8291-95ee4838616f-logs\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.573701 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-gzgp6\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.573785 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6ps4\" (UniqueName: \"kubernetes.io/projected/a0e678e9-3d71-4c27-a179-f4fdd1515701-kube-api-access-x6ps4\") pod \"dnsmasq-dns-785d8bcb8c-gzgp6\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.573890 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.576280 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-gzgp6\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.576903 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-gzgp6\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.577419 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-gzgp6\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.578046 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-gzgp6\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.594736 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-config\") pod \"dnsmasq-dns-785d8bcb8c-gzgp6\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.605893 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6ps4\" (UniqueName: \"kubernetes.io/projected/a0e678e9-3d71-4c27-a179-f4fdd1515701-kube-api-access-x6ps4\") pod \"dnsmasq-dns-785d8bcb8c-gzgp6\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.680189 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-797fb\" (UniqueName: \"kubernetes.io/projected/5ed264cf-9f83-47d5-8291-95ee4838616f-kube-api-access-797fb\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.680281 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ed264cf-9f83-47d5-8291-95ee4838616f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.680308 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ed264cf-9f83-47d5-8291-95ee4838616f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.680363 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ed264cf-9f83-47d5-8291-95ee4838616f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.680435 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ed264cf-9f83-47d5-8291-95ee4838616f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.683827 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ed264cf-9f83-47d5-8291-95ee4838616f-scripts\") pod \"glance-default-internal-api-0\" 
(UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.688390 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ed264cf-9f83-47d5-8291-95ee4838616f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.693012 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ed264cf-9f83-47d5-8291-95ee4838616f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.705541 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ed264cf-9f83-47d5-8291-95ee4838616f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.705783 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ed264cf-9f83-47d5-8291-95ee4838616f-logs\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.705907 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.707211 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ed264cf-9f83-47d5-8291-95ee4838616f-logs\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.707944 4667 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.728104 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-797fb\" (UniqueName: \"kubernetes.io/projected/5ed264cf-9f83-47d5-8291-95ee4838616f-kube-api-access-797fb\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.787461 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 
04:06:40.841367 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:40 crc kubenswrapper[4667]: I0131 04:06:40.861505 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.021029 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b97688f77-kkzs5"] Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.039257 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:06:41 crc kubenswrapper[4667]: W0131 04:06:41.049562 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15d05b5e_da40_49f4_8556_f6a192a9f776.slice/crio-c5a634f4cbe1364fe2f96bfdb122bc8687f895e984f30e46f3fa058ddf8646c0 WatchSource:0}: Error finding container c5a634f4cbe1364fe2f96bfdb122bc8687f895e984f30e46f3fa058ddf8646c0: Status 404 returned error can't find the container with id c5a634f4cbe1364fe2f96bfdb122bc8687f895e984f30e46f3fa058ddf8646c0 Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.079591 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.208223 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c175848a-4645-42e7-8ccc-ab873e1ff7aa","Type":"ContainerStarted","Data":"f4b53ed96e656c378650b2049ae52b412d6038c17fdb0792effdbccfc9a88b45"} Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.211936 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-245c9"] Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.214628 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" event={"ID":"db92e124-da3b-48d4-af73-868133cb57ba","Type":"ContainerStarted","Data":"e2357ec0c61adf801406b00cd44ca8b8cd8fb2679fe1f04ddd3c297e5a87ca49"} Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.214689 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" event={"ID":"db92e124-da3b-48d4-af73-868133cb57ba","Type":"ContainerStarted","Data":"f5c3218f2f00f36f4e659d6ea2d957822376281f4a9a3ef5f8248886a8eac0f6"} Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.214864 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" podUID="db92e124-da3b-48d4-af73-868133cb57ba" containerName="init" containerID="cri-o://e2357ec0c61adf801406b00cd44ca8b8cd8fb2679fe1f04ddd3c297e5a87ca49" gracePeriod=10 Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.221439 4667 generic.go:334] "Generic (PLEG): container finished" podID="70574dc6-d369-4606-b5f3-b5173c354d7a" containerID="fff18d44740ab99af923a4dd78bc1d7ce61c5556feed21280bba43454fd45416" exitCode=0 Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.221575 4667 util.go:48] "No ready sandbox for pod can be found. 
Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.221575 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf"
Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.222155 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" event={"ID":"70574dc6-d369-4606-b5f3-b5173c354d7a","Type":"ContainerDied","Data":"fff18d44740ab99af923a4dd78bc1d7ce61c5556feed21280bba43454fd45416"}
Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.222200 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-kk7qf" event={"ID":"70574dc6-d369-4606-b5f3-b5173c354d7a","Type":"ContainerDied","Data":"69f2b79d3936fdb1ac8627bdeed780d0ed8a92fcbd350d271e93b257dcb0ee6e"}
Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.222220 4667 scope.go:117] "RemoveContainer" containerID="fff18d44740ab99af923a4dd78bc1d7ce61c5556feed21280bba43454fd45416"
Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.235657 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s2th9" event={"ID":"6f4b88f0-e927-43e5-941e-1c431fb7269c","Type":"ContainerStarted","Data":"08193dbd4c1e145245c6f8205aa1840ad42db04eb19e0fcf4fd16878230e64b9"}
Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.235729 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s2th9" event={"ID":"6f4b88f0-e927-43e5-941e-1c431fb7269c","Type":"ContainerStarted","Data":"851144a7c390d03824d9374acb517e9649a5020b4c625ab784a67d3a76e46df1"}
Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.244186 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-dns-svc\") pod \"70574dc6-d369-4606-b5f3-b5173c354d7a\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") "
Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.244299 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-ovsdbserver-nb\") pod \"70574dc6-d369-4606-b5f3-b5173c354d7a\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") "
Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.244360 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8r25\" (UniqueName: \"kubernetes.io/projected/70574dc6-d369-4606-b5f3-b5173c354d7a-kube-api-access-z8r25\") pod \"70574dc6-d369-4606-b5f3-b5173c354d7a\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") "
Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.244408 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-config\") pod \"70574dc6-d369-4606-b5f3-b5173c354d7a\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") "
Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.244476 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-ovsdbserver-sb\") pod \"70574dc6-d369-4606-b5f3-b5173c354d7a\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") "
\"70574dc6-d369-4606-b5f3-b5173c354d7a\" (UID: \"70574dc6-d369-4606-b5f3-b5173c354d7a\") " Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.258612 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vvkc5"] Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.259570 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b97688f77-kkzs5" event={"ID":"15d05b5e-da40-49f4-8556-f6a192a9f776","Type":"ContainerStarted","Data":"c5a634f4cbe1364fe2f96bfdb122bc8687f895e984f30e46f3fa058ddf8646c0"} Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.287422 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70574dc6-d369-4606-b5f3-b5173c354d7a-kube-api-access-z8r25" (OuterVolumeSpecName: "kube-api-access-z8r25") pod "70574dc6-d369-4606-b5f3-b5173c354d7a" (UID: "70574dc6-d369-4606-b5f3-b5173c354d7a"). InnerVolumeSpecName "kube-api-access-z8r25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.357642 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-s2th9" podStartSLOduration=3.357621668 podStartE2EDuration="3.357621668s" podCreationTimestamp="2026-01-31 04:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:06:41.286164354 +0000 UTC m=+1124.802499663" watchObservedRunningTime="2026-01-31 04:06:41.357621668 +0000 UTC m=+1124.873957047" Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.363812 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8r25\" (UniqueName: \"kubernetes.io/projected/70574dc6-d369-4606-b5f3-b5173c354d7a-kube-api-access-z8r25\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.366366 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "70574dc6-d369-4606-b5f3-b5173c354d7a" (UID: "70574dc6-d369-4606-b5f3-b5173c354d7a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.374017 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "70574dc6-d369-4606-b5f3-b5173c354d7a" (UID: "70574dc6-d369-4606-b5f3-b5173c354d7a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.374538 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-config" (OuterVolumeSpecName: "config") pod "70574dc6-d369-4606-b5f3-b5173c354d7a" (UID: "70574dc6-d369-4606-b5f3-b5173c354d7a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.374689 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "70574dc6-d369-4606-b5f3-b5173c354d7a" (UID: "70574dc6-d369-4606-b5f3-b5173c354d7a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.406459 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "70574dc6-d369-4606-b5f3-b5173c354d7a" (UID: "70574dc6-d369-4606-b5f3-b5173c354d7a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.469131 4667 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.469161 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.469172 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.469180 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.469190 4667 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/70574dc6-d369-4606-b5f3-b5173c354d7a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.499056 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mkdm4"] Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.521182 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4nj2p"] Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.525431 4667 scope.go:117] "RemoveContainer" containerID="fff18d44740ab99af923a4dd78bc1d7ce61c5556feed21280bba43454fd45416" Jan 31 04:06:41 crc kubenswrapper[4667]: E0131 04:06:41.526550 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fff18d44740ab99af923a4dd78bc1d7ce61c5556feed21280bba43454fd45416\": container with ID starting with fff18d44740ab99af923a4dd78bc1d7ce61c5556feed21280bba43454fd45416 not found: ID does not exist" containerID="fff18d44740ab99af923a4dd78bc1d7ce61c5556feed21280bba43454fd45416" Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.526583 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fff18d44740ab99af923a4dd78bc1d7ce61c5556feed21280bba43454fd45416"} err="failed to get container status \"fff18d44740ab99af923a4dd78bc1d7ce61c5556feed21280bba43454fd45416\": rpc error: code = NotFound desc = could not find container \"fff18d44740ab99af923a4dd78bc1d7ce61c5556feed21280bba43454fd45416\": container with ID starting with fff18d44740ab99af923a4dd78bc1d7ce61c5556feed21280bba43454fd45416 not found: ID does not exist" Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.646740 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-kk7qf"] Jan 31 04:06:41 
Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.660503 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-kk7qf"]
Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.906195 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-559945cfdf-qwr2k"]
Jan 31 04:06:41 crc kubenswrapper[4667]: I0131 04:06:41.951298 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gzgp6"]
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.026180 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-hqk9s"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.047743 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.157440 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.189555 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.195102 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k4kx\" (UniqueName: \"kubernetes.io/projected/db92e124-da3b-48d4-af73-868133cb57ba-kube-api-access-5k4kx\") pod \"db92e124-da3b-48d4-af73-868133cb57ba\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") "
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.195340 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-dns-svc\") pod \"db92e124-da3b-48d4-af73-868133cb57ba\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") "
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.195384 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-config\") pod \"db92e124-da3b-48d4-af73-868133cb57ba\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") "
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.195440 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-ovsdbserver-sb\") pod \"db92e124-da3b-48d4-af73-868133cb57ba\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") "
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.195520 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-dns-swift-storage-0\") pod \"db92e124-da3b-48d4-af73-868133cb57ba\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") "
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.195634 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-ovsdbserver-nb\") pod \"db92e124-da3b-48d4-af73-868133cb57ba\" (UID: \"db92e124-da3b-48d4-af73-868133cb57ba\") "
"db92e124-da3b-48d4-af73-868133cb57ba" (UID: "db92e124-da3b-48d4-af73-868133cb57ba"). InnerVolumeSpecName "kube-api-access-5k4kx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.262566 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "db92e124-da3b-48d4-af73-868133cb57ba" (UID: "db92e124-da3b-48d4-af73-868133cb57ba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.297626 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.297654 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k4kx\" (UniqueName: \"kubernetes.io/projected/db92e124-da3b-48d4-af73-868133cb57ba-kube-api-access-5k4kx\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.310223 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "db92e124-da3b-48d4-af73-868133cb57ba" (UID: "db92e124-da3b-48d4-af73-868133cb57ba"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.373945 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.399620 4667 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.480153 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "db92e124-da3b-48d4-af73-868133cb57ba" (UID: "db92e124-da3b-48d4-af73-868133cb57ba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.511054 4667 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.518469 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-config" (OuterVolumeSpecName: "config") pod "db92e124-da3b-48d4-af73-868133cb57ba" (UID: "db92e124-da3b-48d4-af73-868133cb57ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.519967 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "db92e124-da3b-48d4-af73-868133cb57ba" (UID: "db92e124-da3b-48d4-af73-868133cb57ba"). 
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.519967 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "db92e124-da3b-48d4-af73-868133cb57ba" (UID: "db92e124-da3b-48d4-af73-868133cb57ba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.552263 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b97688f77-kkzs5"]
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.571570 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vvkc5" event={"ID":"b58d4b49-fb58-480e-9a43-2675ce1fc0c1","Type":"ContainerStarted","Data":"49e6db4ed43ae57cf2c262faf58a2c2951eec05b799f08020e9a3f1a594aeac7"}
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.571633 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vvkc5" event={"ID":"b58d4b49-fb58-480e-9a43-2675ce1fc0c1","Type":"ContainerStarted","Data":"00bd4d030dac7cc4f0362e2625d7c88f23ab4437b49e06790fc9a23801857411"}
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.632984 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-245c9" event={"ID":"bc9db8ae-2f60-4efd-9a11-4aac5f336900","Type":"ContainerStarted","Data":"33743d1499cdabcf6f82930a3464c7d322aa057fd1e661d13bba920192c5232c"}
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.634323 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" event={"ID":"a0e678e9-3d71-4c27-a179-f4fdd1515701","Type":"ContainerStarted","Data":"4be911f990fe2db6f473ef6b343a6f9c56c8b730a803a8d290f6b3a4b72c0ba8"}
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.635831 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c3512ca-ae09-4231-920d-7b0694041097","Type":"ContainerStarted","Data":"4ec50165be40949f73f11ad3b6b144b53a953cbad0d48cc1bc12719cfe53c876"}
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.639170 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.639215 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db92e124-da3b-48d4-af73-868133cb57ba-config\") on node \"crc\" DevicePath \"\""
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.658552 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4nj2p" event={"ID":"7b6bac61-1103-438b-9e75-f3d6b6902270","Type":"ContainerStarted","Data":"8a13527100341ec328f0ca96a57e3e425c0751023ece72646847c5d0b46270d3"}
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.662769 4667 generic.go:334] "Generic (PLEG): container finished" podID="db92e124-da3b-48d4-af73-868133cb57ba" containerID="e2357ec0c61adf801406b00cd44ca8b8cd8fb2679fe1f04ddd3c297e5a87ca49" exitCode=0
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.662838 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" event={"ID":"db92e124-da3b-48d4-af73-868133cb57ba","Type":"ContainerDied","Data":"e2357ec0c61adf801406b00cd44ca8b8cd8fb2679fe1f04ddd3c297e5a87ca49"}
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.662885 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-hqk9s" event={"ID":"db92e124-da3b-48d4-af73-868133cb57ba","Type":"ContainerDied","Data":"f5c3218f2f00f36f4e659d6ea2d957822376281f4a9a3ef5f8248886a8eac0f6"}
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.662917 4667 scope.go:117] "RemoveContainer" containerID="e2357ec0c61adf801406b00cd44ca8b8cd8fb2679fe1f04ddd3c297e5a87ca49"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.663097 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-hqk9s"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.677559 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-559945cfdf-qwr2k" event={"ID":"f1636f31-13c4-4745-88a2-20ff71e46358","Type":"ContainerStarted","Data":"1f23cd9deaf2930e302cddb559d39075628ec54f8220249420db03a7015b7932"}
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.679505 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mkdm4" event={"ID":"6e23be1c-6ab2-442e-b12e-e4083c274a67","Type":"ContainerStarted","Data":"5fca837a3b11257d244086a66e6c594919dfc96073c003b183444fc3f6002205"}
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.712891 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.738378 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-85bb8b576c-j4h8q"]
Jan 31 04:06:42 crc kubenswrapper[4667]: E0131 04:06:42.738942 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db92e124-da3b-48d4-af73-868133cb57ba" containerName="init"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.738958 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="db92e124-da3b-48d4-af73-868133cb57ba" containerName="init"
Jan 31 04:06:42 crc kubenswrapper[4667]: E0131 04:06:42.738992 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70574dc6-d369-4606-b5f3-b5173c354d7a" containerName="init"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.739000 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="70574dc6-d369-4606-b5f3-b5173c354d7a" containerName="init"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.739234 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="70574dc6-d369-4606-b5f3-b5173c354d7a" containerName="init"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.739259 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="db92e124-da3b-48d4-af73-868133cb57ba" containerName="init"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.741145 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85bb8b576c-j4h8q"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.748169 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85bb8b576c-j4h8q"]
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.766142 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-vvkc5" podStartSLOduration=3.766110692 podStartE2EDuration="3.766110692s" podCreationTimestamp="2026-01-31 04:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:06:42.603290957 +0000 UTC m=+1126.119626256" watchObservedRunningTime="2026-01-31 04:06:42.766110692 +0000 UTC m=+1126.282445991"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.797044 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-hqk9s"]
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.805713 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-hqk9s"]
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.847666 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec50b0b7-497c-4bd1-a031-007dbd616e3c-config-data\") pod \"horizon-85bb8b576c-j4h8q\" (UID: \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\") " pod="openstack/horizon-85bb8b576c-j4h8q"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.848180 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkxc2\" (UniqueName: \"kubernetes.io/projected/ec50b0b7-497c-4bd1-a031-007dbd616e3c-kube-api-access-vkxc2\") pod \"horizon-85bb8b576c-j4h8q\" (UID: \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\") " pod="openstack/horizon-85bb8b576c-j4h8q"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.851228 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec50b0b7-497c-4bd1-a031-007dbd616e3c-logs\") pod \"horizon-85bb8b576c-j4h8q\" (UID: \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\") " pod="openstack/horizon-85bb8b576c-j4h8q"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.851260 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec50b0b7-497c-4bd1-a031-007dbd616e3c-scripts\") pod \"horizon-85bb8b576c-j4h8q\" (UID: \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\") " pod="openstack/horizon-85bb8b576c-j4h8q"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.851315 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ec50b0b7-497c-4bd1-a031-007dbd616e3c-horizon-secret-key\") pod \"horizon-85bb8b576c-j4h8q\" (UID: \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\") " pod="openstack/horizon-85bb8b576c-j4h8q"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.926571 4667 scope.go:117] "RemoveContainer" containerID="e2357ec0c61adf801406b00cd44ca8b8cd8fb2679fe1f04ddd3c297e5a87ca49"
Jan 31 04:06:42 crc kubenswrapper[4667]: E0131 04:06:42.930906 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2357ec0c61adf801406b00cd44ca8b8cd8fb2679fe1f04ddd3c297e5a87ca49\": container with ID starting with e2357ec0c61adf801406b00cd44ca8b8cd8fb2679fe1f04ddd3c297e5a87ca49 not found: ID does not exist" containerID="e2357ec0c61adf801406b00cd44ca8b8cd8fb2679fe1f04ddd3c297e5a87ca49"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.930949 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2357ec0c61adf801406b00cd44ca8b8cd8fb2679fe1f04ddd3c297e5a87ca49"} err="failed to get container status \"e2357ec0c61adf801406b00cd44ca8b8cd8fb2679fe1f04ddd3c297e5a87ca49\": rpc error: code = NotFound desc = could not find container \"e2357ec0c61adf801406b00cd44ca8b8cd8fb2679fe1f04ddd3c297e5a87ca49\": container with ID starting with e2357ec0c61adf801406b00cd44ca8b8cd8fb2679fe1f04ddd3c297e5a87ca49 not found: ID does not exist"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.952562 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec50b0b7-497c-4bd1-a031-007dbd616e3c-config-data\") pod \"horizon-85bb8b576c-j4h8q\" (UID: \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\") " pod="openstack/horizon-85bb8b576c-j4h8q"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.952605 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkxc2\" (UniqueName: \"kubernetes.io/projected/ec50b0b7-497c-4bd1-a031-007dbd616e3c-kube-api-access-vkxc2\") pod \"horizon-85bb8b576c-j4h8q\" (UID: \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\") " pod="openstack/horizon-85bb8b576c-j4h8q"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.952689 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec50b0b7-497c-4bd1-a031-007dbd616e3c-logs\") pod \"horizon-85bb8b576c-j4h8q\" (UID: \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\") " pod="openstack/horizon-85bb8b576c-j4h8q"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.952717 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec50b0b7-497c-4bd1-a031-007dbd616e3c-scripts\") pod \"horizon-85bb8b576c-j4h8q\" (UID: \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\") " pod="openstack/horizon-85bb8b576c-j4h8q"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.952747 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ec50b0b7-497c-4bd1-a031-007dbd616e3c-horizon-secret-key\") pod \"horizon-85bb8b576c-j4h8q\" (UID: \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\") " pod="openstack/horizon-85bb8b576c-j4h8q"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.954142 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec50b0b7-497c-4bd1-a031-007dbd616e3c-logs\") pod \"horizon-85bb8b576c-j4h8q\" (UID: \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\") " pod="openstack/horizon-85bb8b576c-j4h8q"
Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.954657 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec50b0b7-497c-4bd1-a031-007dbd616e3c-scripts\") pod \"horizon-85bb8b576c-j4h8q\" (UID: \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\") " pod="openstack/horizon-85bb8b576c-j4h8q"
\"kubernetes.io/configmap/ec50b0b7-497c-4bd1-a031-007dbd616e3c-config-data\") pod \"horizon-85bb8b576c-j4h8q\" (UID: \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\") " pod="openstack/horizon-85bb8b576c-j4h8q" Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.962368 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ec50b0b7-497c-4bd1-a031-007dbd616e3c-horizon-secret-key\") pod \"horizon-85bb8b576c-j4h8q\" (UID: \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\") " pod="openstack/horizon-85bb8b576c-j4h8q" Jan 31 04:06:42 crc kubenswrapper[4667]: I0131 04:06:42.995590 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkxc2\" (UniqueName: \"kubernetes.io/projected/ec50b0b7-497c-4bd1-a031-007dbd616e3c-kube-api-access-vkxc2\") pod \"horizon-85bb8b576c-j4h8q\" (UID: \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\") " pod="openstack/horizon-85bb8b576c-j4h8q" Jan 31 04:06:43 crc kubenswrapper[4667]: I0131 04:06:43.099019 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85bb8b576c-j4h8q" Jan 31 04:06:43 crc kubenswrapper[4667]: I0131 04:06:43.316442 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70574dc6-d369-4606-b5f3-b5173c354d7a" path="/var/lib/kubelet/pods/70574dc6-d369-4606-b5f3-b5173c354d7a/volumes" Jan 31 04:06:43 crc kubenswrapper[4667]: I0131 04:06:43.318428 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db92e124-da3b-48d4-af73-868133cb57ba" path="/var/lib/kubelet/pods/db92e124-da3b-48d4-af73-868133cb57ba/volumes" Jan 31 04:06:43 crc kubenswrapper[4667]: I0131 04:06:43.702107 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5ed264cf-9f83-47d5-8291-95ee4838616f","Type":"ContainerStarted","Data":"2ba19547fbd37d702bb18b0b8276e0b6780283f1f88d24bcd2db4a438285ff35"} Jan 31 04:06:43 crc kubenswrapper[4667]: I0131 04:06:43.713270 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85bb8b576c-j4h8q"] Jan 31 04:06:43 crc kubenswrapper[4667]: I0131 04:06:43.717235 4667 generic.go:334] "Generic (PLEG): container finished" podID="a0e678e9-3d71-4c27-a179-f4fdd1515701" containerID="fdd1cd838e537db8ae2d9de10dfce44473176f3f68d9b971afbbf2fbcb3b8c56" exitCode=0 Jan 31 04:06:43 crc kubenswrapper[4667]: I0131 04:06:43.717308 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" event={"ID":"a0e678e9-3d71-4c27-a179-f4fdd1515701","Type":"ContainerDied","Data":"fdd1cd838e537db8ae2d9de10dfce44473176f3f68d9b971afbbf2fbcb3b8c56"} Jan 31 04:06:43 crc kubenswrapper[4667]: W0131 04:06:43.732654 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec50b0b7_497c_4bd1_a031_007dbd616e3c.slice/crio-e8415ebc17ff236763ea92519f48854b0cd6001803361ad6df09f684c16993d4 WatchSource:0}: Error finding container e8415ebc17ff236763ea92519f48854b0cd6001803361ad6df09f684c16993d4: Status 404 returned error can't find the container with id e8415ebc17ff236763ea92519f48854b0cd6001803361ad6df09f684c16993d4 Jan 31 04:06:44 crc kubenswrapper[4667]: I0131 04:06:44.742223 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85bb8b576c-j4h8q" event={"ID":"ec50b0b7-497c-4bd1-a031-007dbd616e3c","Type":"ContainerStarted","Data":"e8415ebc17ff236763ea92519f48854b0cd6001803361ad6df09f684c16993d4"} Jan 31 
Jan 31 04:06:44 crc kubenswrapper[4667]: I0131 04:06:44.748352 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5ed264cf-9f83-47d5-8291-95ee4838616f","Type":"ContainerStarted","Data":"000d672a56472d3246474cbd083ffa6d5135b5dfd347ea32f166923b80992033"}
Jan 31 04:06:44 crc kubenswrapper[4667]: I0131 04:06:44.754184 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" event={"ID":"a0e678e9-3d71-4c27-a179-f4fdd1515701","Type":"ContainerStarted","Data":"cbd292d4994e7bb4f762b3480154df9034043365069ac633f8390e73438bd433"}
Jan 31 04:06:44 crc kubenswrapper[4667]: I0131 04:06:44.754309 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6"
Jan 31 04:06:44 crc kubenswrapper[4667]: I0131 04:06:44.757298 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c3512ca-ae09-4231-920d-7b0694041097","Type":"ContainerStarted","Data":"94750ae3e2a7775053bc506dba7c793f1f9b54228fe142b1885b84d78ccfe343"}
Jan 31 04:06:45 crc kubenswrapper[4667]: I0131 04:06:45.790196 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c3512ca-ae09-4231-920d-7b0694041097","Type":"ContainerStarted","Data":"0862f9f49ed7c9c42ddbfe2b9705819092f285174ad06a4b3a2acf308bb6b815"}
Jan 31 04:06:45 crc kubenswrapper[4667]: I0131 04:06:45.793402 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1c3512ca-ae09-4231-920d-7b0694041097" containerName="glance-log" containerID="cri-o://94750ae3e2a7775053bc506dba7c793f1f9b54228fe142b1885b84d78ccfe343" gracePeriod=30
Jan 31 04:06:45 crc kubenswrapper[4667]: I0131 04:06:45.793464 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1c3512ca-ae09-4231-920d-7b0694041097" containerName="glance-httpd" containerID="cri-o://0862f9f49ed7c9c42ddbfe2b9705819092f285174ad06a4b3a2acf308bb6b815" gracePeriod=30
Jan 31 04:06:45 crc kubenswrapper[4667]: I0131 04:06:45.801970 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5ed264cf-9f83-47d5-8291-95ee4838616f","Type":"ContainerStarted","Data":"7b99f67ddc65ec7524cf9870677db218127e04e2e6679b99a2d7e70da0369518"}
Jan 31 04:06:45 crc kubenswrapper[4667]: I0131 04:06:45.802606 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5ed264cf-9f83-47d5-8291-95ee4838616f" containerName="glance-httpd" containerID="cri-o://7b99f67ddc65ec7524cf9870677db218127e04e2e6679b99a2d7e70da0369518" gracePeriod=30
Jan 31 04:06:45 crc kubenswrapper[4667]: I0131 04:06:45.802631 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5ed264cf-9f83-47d5-8291-95ee4838616f" containerName="glance-log" containerID="cri-o://000d672a56472d3246474cbd083ffa6d5135b5dfd347ea32f166923b80992033" gracePeriod=30
Jan 31 04:06:45 crc kubenswrapper[4667]: I0131 04:06:45.838830 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.838805466 podStartE2EDuration="6.838805466s" podCreationTimestamp="2026-01-31 04:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:06:45.817470441 +0000 UTC m=+1129.333805760" watchObservedRunningTime="2026-01-31 04:06:45.838805466 +0000 UTC m=+1129.355140765"
Jan 31 04:06:45 crc kubenswrapper[4667]: I0131 04:06:45.857191 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" podStartSLOduration=5.857160923 podStartE2EDuration="5.857160923s" podCreationTimestamp="2026-01-31 04:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:06:44.781793145 +0000 UTC m=+1128.298128454" watchObservedRunningTime="2026-01-31 04:06:45.857160923 +0000 UTC m=+1129.373496212"
Jan 31 04:06:45 crc kubenswrapper[4667]: I0131 04:06:45.870272 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.870243969 podStartE2EDuration="6.870243969s" podCreationTimestamp="2026-01-31 04:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:06:45.866761957 +0000 UTC m=+1129.383097276" watchObservedRunningTime="2026-01-31 04:06:45.870243969 +0000 UTC m=+1129.386579268"
Jan 31 04:06:45 crc kubenswrapper[4667]: E0131 04:06:45.925316 4667 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c3512ca_ae09_4231_920d_7b0694041097.slice/crio-0862f9f49ed7c9c42ddbfe2b9705819092f285174ad06a4b3a2acf308bb6b815.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ed264cf_9f83_47d5_8291_95ee4838616f.slice/crio-7b99f67ddc65ec7524cf9870677db218127e04e2e6679b99a2d7e70da0369518.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ed264cf_9f83_47d5_8291_95ee4838616f.slice/crio-000d672a56472d3246474cbd083ffa6d5135b5dfd347ea32f166923b80992033.scope\": RecentStats: unable to find data in memory cache]"
Jan 31 04:06:46 crc kubenswrapper[4667]: I0131 04:06:46.849077 4667 generic.go:334] "Generic (PLEG): container finished" podID="5ed264cf-9f83-47d5-8291-95ee4838616f" containerID="7b99f67ddc65ec7524cf9870677db218127e04e2e6679b99a2d7e70da0369518" exitCode=143
Jan 31 04:06:46 crc kubenswrapper[4667]: I0131 04:06:46.849495 4667 generic.go:334] "Generic (PLEG): container finished" podID="5ed264cf-9f83-47d5-8291-95ee4838616f" containerID="000d672a56472d3246474cbd083ffa6d5135b5dfd347ea32f166923b80992033" exitCode=143
Jan 31 04:06:46 crc kubenswrapper[4667]: I0131 04:06:46.849592 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5ed264cf-9f83-47d5-8291-95ee4838616f","Type":"ContainerDied","Data":"7b99f67ddc65ec7524cf9870677db218127e04e2e6679b99a2d7e70da0369518"}
Jan 31 04:06:46 crc kubenswrapper[4667]: I0131 04:06:46.849635 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5ed264cf-9f83-47d5-8291-95ee4838616f","Type":"ContainerDied","Data":"000d672a56472d3246474cbd083ffa6d5135b5dfd347ea32f166923b80992033"}
podID="1c3512ca-ae09-4231-920d-7b0694041097" containerID="0862f9f49ed7c9c42ddbfe2b9705819092f285174ad06a4b3a2acf308bb6b815" exitCode=143 Jan 31 04:06:46 crc kubenswrapper[4667]: I0131 04:06:46.853703 4667 generic.go:334] "Generic (PLEG): container finished" podID="1c3512ca-ae09-4231-920d-7b0694041097" containerID="94750ae3e2a7775053bc506dba7c793f1f9b54228fe142b1885b84d78ccfe343" exitCode=143 Jan 31 04:06:46 crc kubenswrapper[4667]: I0131 04:06:46.853731 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c3512ca-ae09-4231-920d-7b0694041097","Type":"ContainerDied","Data":"0862f9f49ed7c9c42ddbfe2b9705819092f285174ad06a4b3a2acf308bb6b815"} Jan 31 04:06:46 crc kubenswrapper[4667]: I0131 04:06:46.853747 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c3512ca-ae09-4231-920d-7b0694041097","Type":"ContainerDied","Data":"94750ae3e2a7775053bc506dba7c793f1f9b54228fe142b1885b84d78ccfe343"} Jan 31 04:06:47 crc kubenswrapper[4667]: I0131 04:06:47.869293 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s2th9" event={"ID":"6f4b88f0-e927-43e5-941e-1c431fb7269c","Type":"ContainerDied","Data":"08193dbd4c1e145245c6f8205aa1840ad42db04eb19e0fcf4fd16878230e64b9"} Jan 31 04:06:47 crc kubenswrapper[4667]: I0131 04:06:47.869243 4667 generic.go:334] "Generic (PLEG): container finished" podID="6f4b88f0-e927-43e5-941e-1c431fb7269c" containerID="08193dbd4c1e145245c6f8205aa1840ad42db04eb19e0fcf4fd16878230e64b9" exitCode=0 Jan 31 04:06:50 crc kubenswrapper[4667]: I0131 04:06:50.844640 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:06:50 crc kubenswrapper[4667]: I0131 04:06:50.930341 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9n7xp"] Jan 31 04:06:50 crc kubenswrapper[4667]: I0131 04:06:50.930599 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" podUID="ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1" containerName="dnsmasq-dns" containerID="cri-o://6f17dcd2e4af8382cd7b790476bde52a1a9da0e2ad82b484738e0fc7f69ea4b3" gracePeriod=10 Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.257314 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-559945cfdf-qwr2k"] Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.375466 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-78789d8f44-5trmc"] Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.378730 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.394925 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.416020 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78789d8f44-5trmc"] Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.454396 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-85bb8b576c-j4h8q"] Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.505359 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-86c748c4d6-2grmh"] Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.508404 4667 util.go:30] "No sandbox for pod can be found. 
Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.508404 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86c748c4d6-2grmh"
Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.509708 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fltqc\" (UniqueName: \"kubernetes.io/projected/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-kube-api-access-fltqc\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " pod="openstack/horizon-78789d8f44-5trmc"
Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.510705 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-combined-ca-bundle\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " pod="openstack/horizon-78789d8f44-5trmc"
Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.510798 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-scripts\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " pod="openstack/horizon-78789d8f44-5trmc"
Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.510932 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-config-data\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " pod="openstack/horizon-78789d8f44-5trmc"
Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.511021 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-horizon-tls-certs\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " pod="openstack/horizon-78789d8f44-5trmc"
Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.511309 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-horizon-secret-key\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " pod="openstack/horizon-78789d8f44-5trmc"
Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.511478 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-logs\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " pod="openstack/horizon-78789d8f44-5trmc"
Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.548197 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86c748c4d6-2grmh"]
Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.613686 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-config-data\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " pod="openstack/horizon-78789d8f44-5trmc"
(UniqueName: \"kubernetes.io/secret/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-horizon-tls-certs\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.613786 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6974567-3bea-447a-bb8b-ced22b6d34ce-logs\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.613838 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-horizon-secret-key\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.614263 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6974567-3bea-447a-bb8b-ced22b6d34ce-scripts\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.614291 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6974567-3bea-447a-bb8b-ced22b6d34ce-horizon-tls-certs\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.614320 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-logs\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.614347 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6974567-3bea-447a-bb8b-ced22b6d34ce-config-data\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.614372 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lssrh\" (UniqueName: \"kubernetes.io/projected/c6974567-3bea-447a-bb8b-ced22b6d34ce-kube-api-access-lssrh\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.614389 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6974567-3bea-447a-bb8b-ced22b6d34ce-combined-ca-bundle\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.614415 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/c6974567-3bea-447a-bb8b-ced22b6d34ce-horizon-secret-key\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.614434 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fltqc\" (UniqueName: \"kubernetes.io/projected/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-kube-api-access-fltqc\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.614463 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-combined-ca-bundle\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.614482 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-scripts\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.615280 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-config-data\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.615527 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-logs\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.615689 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-scripts\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.625194 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-combined-ca-bundle\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.628397 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-horizon-secret-key\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.635519 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fltqc\" (UniqueName: \"kubernetes.io/projected/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-kube-api-access-fltqc\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " 
pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.669454 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-horizon-tls-certs\") pod \"horizon-78789d8f44-5trmc\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.716264 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6974567-3bea-447a-bb8b-ced22b6d34ce-scripts\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.716326 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6974567-3bea-447a-bb8b-ced22b6d34ce-horizon-tls-certs\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.716374 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6974567-3bea-447a-bb8b-ced22b6d34ce-config-data\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.716412 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lssrh\" (UniqueName: \"kubernetes.io/projected/c6974567-3bea-447a-bb8b-ced22b6d34ce-kube-api-access-lssrh\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.716448 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6974567-3bea-447a-bb8b-ced22b6d34ce-combined-ca-bundle\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.716483 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c6974567-3bea-447a-bb8b-ced22b6d34ce-horizon-secret-key\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.716582 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6974567-3bea-447a-bb8b-ced22b6d34ce-logs\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.717036 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6974567-3bea-447a-bb8b-ced22b6d34ce-logs\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.717923 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c6974567-3bea-447a-bb8b-ced22b6d34ce-config-data\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.718345 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6974567-3bea-447a-bb8b-ced22b6d34ce-scripts\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.724200 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c6974567-3bea-447a-bb8b-ced22b6d34ce-horizon-secret-key\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.725482 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6974567-3bea-447a-bb8b-ced22b6d34ce-combined-ca-bundle\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.729517 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6974567-3bea-447a-bb8b-ced22b6d34ce-horizon-tls-certs\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.733533 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lssrh\" (UniqueName: \"kubernetes.io/projected/c6974567-3bea-447a-bb8b-ced22b6d34ce-kube-api-access-lssrh\") pod \"horizon-86c748c4d6-2grmh\" (UID: \"c6974567-3bea-447a-bb8b-ced22b6d34ce\") " pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.753634 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.842765 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.953527 4667 generic.go:334] "Generic (PLEG): container finished" podID="ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1" containerID="6f17dcd2e4af8382cd7b790476bde52a1a9da0e2ad82b484738e0fc7f69ea4b3" exitCode=0 Jan 31 04:06:51 crc kubenswrapper[4667]: I0131 04:06:51.953581 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" event={"ID":"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1","Type":"ContainerDied","Data":"6f17dcd2e4af8382cd7b790476bde52a1a9da0e2ad82b484738e0fc7f69ea4b3"} Jan 31 04:06:54 crc kubenswrapper[4667]: I0131 04:06:54.088119 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" podUID="ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Jan 31 04:07:00 crc kubenswrapper[4667]: E0131 04:07:00.784219 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 31 04:07:00 crc kubenswrapper[4667]: E0131 04:07:00.786888 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699hbbh58h84h54fhd4h64fhc8hf9h5b7h678h67fhb5h5b4h84h9h75h64fh689h595h594hc8hffhb6h685h65h5bch8fh5d7h5dbh68h5f9q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8wm8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-559945cfdf-qwr2k_openstack(f1636f31-13c4-4745-88a2-20ff71e46358): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:07:00 crc kubenswrapper[4667]: E0131 04:07:00.789140 4667 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 31 04:07:00 crc kubenswrapper[4667]: E0131 04:07:00.789334 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d9hc8h54h58dh58fhd5h58dh5bfh86h568h697h5f6h679h5d4h76h595hd7h599hd9h55ch5fbh684h5c4h5h684h569h676h5c6h58bh655h56ch578q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkxc2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-85bb8b576c-j4h8q_openstack(ec50b0b7-497c-4bd1-a031-007dbd616e3c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:07:00 crc kubenswrapper[4667]: E0131 04:07:00.789473 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-559945cfdf-qwr2k" podUID="f1636f31-13c4-4745-88a2-20ff71e46358" Jan 31 04:07:00 crc kubenswrapper[4667]: E0131 04:07:00.793681 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-85bb8b576c-j4h8q" podUID="ec50b0b7-497c-4bd1-a031-007dbd616e3c" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.200726 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.207281 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.214598 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s2th9" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.284260 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-797fb\" (UniqueName: \"kubernetes.io/projected/5ed264cf-9f83-47d5-8291-95ee4838616f-kube-api-access-797fb\") pod \"5ed264cf-9f83-47d5-8291-95ee4838616f\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.284341 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c3512ca-ae09-4231-920d-7b0694041097-combined-ca-bundle\") pod \"1c3512ca-ae09-4231-920d-7b0694041097\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.284371 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ed264cf-9f83-47d5-8291-95ee4838616f-scripts\") pod \"5ed264cf-9f83-47d5-8291-95ee4838616f\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.284461 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-scripts\") pod \"6f4b88f0-e927-43e5-941e-1c431fb7269c\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.284532 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ed264cf-9f83-47d5-8291-95ee4838616f-combined-ca-bundle\") pod \"5ed264cf-9f83-47d5-8291-95ee4838616f\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.284615 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-credential-keys\") pod \"6f4b88f0-e927-43e5-941e-1c431fb7269c\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.284651 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lb6c\" (UniqueName: \"kubernetes.io/projected/6f4b88f0-e927-43e5-941e-1c431fb7269c-kube-api-access-9lb6c\") pod \"6f4b88f0-e927-43e5-941e-1c431fb7269c\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.284696 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-fernet-keys\") pod \"6f4b88f0-e927-43e5-941e-1c431fb7269c\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.284732 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c3512ca-ae09-4231-920d-7b0694041097-scripts\") pod 
\"1c3512ca-ae09-4231-920d-7b0694041097\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.284756 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ed264cf-9f83-47d5-8291-95ee4838616f-logs\") pod \"5ed264cf-9f83-47d5-8291-95ee4838616f\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.284793 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c3512ca-ae09-4231-920d-7b0694041097-logs\") pod \"1c3512ca-ae09-4231-920d-7b0694041097\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.284856 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"1c3512ca-ae09-4231-920d-7b0694041097\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.284894 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c3512ca-ae09-4231-920d-7b0694041097-httpd-run\") pod \"1c3512ca-ae09-4231-920d-7b0694041097\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.284937 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c3512ca-ae09-4231-920d-7b0694041097-config-data\") pod \"1c3512ca-ae09-4231-920d-7b0694041097\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.284958 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw9fx\" (UniqueName: \"kubernetes.io/projected/1c3512ca-ae09-4231-920d-7b0694041097-kube-api-access-nw9fx\") pod \"1c3512ca-ae09-4231-920d-7b0694041097\" (UID: \"1c3512ca-ae09-4231-920d-7b0694041097\") " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.285029 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-combined-ca-bundle\") pod \"6f4b88f0-e927-43e5-941e-1c431fb7269c\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.285067 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ed264cf-9f83-47d5-8291-95ee4838616f-config-data\") pod \"5ed264cf-9f83-47d5-8291-95ee4838616f\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.285111 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"5ed264cf-9f83-47d5-8291-95ee4838616f\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.285201 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-config-data\") pod \"6f4b88f0-e927-43e5-941e-1c431fb7269c\" (UID: \"6f4b88f0-e927-43e5-941e-1c431fb7269c\") " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 
04:07:03.285245 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ed264cf-9f83-47d5-8291-95ee4838616f-httpd-run\") pod \"5ed264cf-9f83-47d5-8291-95ee4838616f\" (UID: \"5ed264cf-9f83-47d5-8291-95ee4838616f\") " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.286198 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ed264cf-9f83-47d5-8291-95ee4838616f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5ed264cf-9f83-47d5-8291-95ee4838616f" (UID: "5ed264cf-9f83-47d5-8291-95ee4838616f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.288976 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c3512ca-ae09-4231-920d-7b0694041097-logs" (OuterVolumeSpecName: "logs") pod "1c3512ca-ae09-4231-920d-7b0694041097" (UID: "1c3512ca-ae09-4231-920d-7b0694041097"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.293238 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c3512ca-ae09-4231-920d-7b0694041097-kube-api-access-nw9fx" (OuterVolumeSpecName: "kube-api-access-nw9fx") pod "1c3512ca-ae09-4231-920d-7b0694041097" (UID: "1c3512ca-ae09-4231-920d-7b0694041097"). InnerVolumeSpecName "kube-api-access-nw9fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.296322 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ed264cf-9f83-47d5-8291-95ee4838616f-kube-api-access-797fb" (OuterVolumeSpecName: "kube-api-access-797fb") pod "5ed264cf-9f83-47d5-8291-95ee4838616f" (UID: "5ed264cf-9f83-47d5-8291-95ee4838616f"). InnerVolumeSpecName "kube-api-access-797fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.297426 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "1c3512ca-ae09-4231-920d-7b0694041097" (UID: "1c3512ca-ae09-4231-920d-7b0694041097"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.297816 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c3512ca-ae09-4231-920d-7b0694041097-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1c3512ca-ae09-4231-920d-7b0694041097" (UID: "1c3512ca-ae09-4231-920d-7b0694041097"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.300560 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6f4b88f0-e927-43e5-941e-1c431fb7269c" (UID: "6f4b88f0-e927-43e5-941e-1c431fb7269c"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.302373 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c3512ca-ae09-4231-920d-7b0694041097-scripts" (OuterVolumeSpecName: "scripts") pod "1c3512ca-ae09-4231-920d-7b0694041097" (UID: "1c3512ca-ae09-4231-920d-7b0694041097"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.302431 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f4b88f0-e927-43e5-941e-1c431fb7269c-kube-api-access-9lb6c" (OuterVolumeSpecName: "kube-api-access-9lb6c") pod "6f4b88f0-e927-43e5-941e-1c431fb7269c" (UID: "6f4b88f0-e927-43e5-941e-1c431fb7269c"). InnerVolumeSpecName "kube-api-access-9lb6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.310423 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ed264cf-9f83-47d5-8291-95ee4838616f-scripts" (OuterVolumeSpecName: "scripts") pod "5ed264cf-9f83-47d5-8291-95ee4838616f" (UID: "5ed264cf-9f83-47d5-8291-95ee4838616f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.310423 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6f4b88f0-e927-43e5-941e-1c431fb7269c" (UID: "6f4b88f0-e927-43e5-941e-1c431fb7269c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.323757 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "5ed264cf-9f83-47d5-8291-95ee4838616f" (UID: "5ed264cf-9f83-47d5-8291-95ee4838616f"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.327028 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ed264cf-9f83-47d5-8291-95ee4838616f-logs" (OuterVolumeSpecName: "logs") pod "5ed264cf-9f83-47d5-8291-95ee4838616f" (UID: "5ed264cf-9f83-47d5-8291-95ee4838616f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.338187 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ed264cf-9f83-47d5-8291-95ee4838616f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ed264cf-9f83-47d5-8291-95ee4838616f" (UID: "5ed264cf-9f83-47d5-8291-95ee4838616f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.354999 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-config-data" (OuterVolumeSpecName: "config-data") pod "6f4b88f0-e927-43e5-941e-1c431fb7269c" (UID: "6f4b88f0-e927-43e5-941e-1c431fb7269c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.366943 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-scripts" (OuterVolumeSpecName: "scripts") pod "6f4b88f0-e927-43e5-941e-1c431fb7269c" (UID: "6f4b88f0-e927-43e5-941e-1c431fb7269c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.377109 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c3512ca-ae09-4231-920d-7b0694041097-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c3512ca-ae09-4231-920d-7b0694041097" (UID: "1c3512ca-ae09-4231-920d-7b0694041097"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.380505 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ed264cf-9f83-47d5-8291-95ee4838616f-config-data" (OuterVolumeSpecName: "config-data") pod "5ed264cf-9f83-47d5-8291-95ee4838616f" (UID: "5ed264cf-9f83-47d5-8291-95ee4838616f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.383384 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f4b88f0-e927-43e5-941e-1c431fb7269c" (UID: "6f4b88f0-e927-43e5-941e-1c431fb7269c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.388068 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ed264cf-9f83-47d5-8291-95ee4838616f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.388097 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lb6c\" (UniqueName: \"kubernetes.io/projected/6f4b88f0-e927-43e5-941e-1c431fb7269c-kube-api-access-9lb6c\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.388111 4667 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.388123 4667 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.388132 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c3512ca-ae09-4231-920d-7b0694041097-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.388149 4667 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ed264cf-9f83-47d5-8291-95ee4838616f-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.388157 4667 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1c3512ca-ae09-4231-920d-7b0694041097-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.388187 4667 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.388197 4667 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c3512ca-ae09-4231-920d-7b0694041097-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.388206 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw9fx\" (UniqueName: \"kubernetes.io/projected/1c3512ca-ae09-4231-920d-7b0694041097-kube-api-access-nw9fx\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.388216 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.388234 4667 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.388245 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ed264cf-9f83-47d5-8291-95ee4838616f-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.388254 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.388262 4667 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5ed264cf-9f83-47d5-8291-95ee4838616f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.388273 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-797fb\" (UniqueName: \"kubernetes.io/projected/5ed264cf-9f83-47d5-8291-95ee4838616f-kube-api-access-797fb\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.388281 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c3512ca-ae09-4231-920d-7b0694041097-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.388289 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ed264cf-9f83-47d5-8291-95ee4838616f-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.388298 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f4b88f0-e927-43e5-941e-1c431fb7269c-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.388705 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c3512ca-ae09-4231-920d-7b0694041097-config-data" (OuterVolumeSpecName: "config-data") pod "1c3512ca-ae09-4231-920d-7b0694041097" (UID: 
"1c3512ca-ae09-4231-920d-7b0694041097"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.411801 4667 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.412006 4667 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.489614 4667 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.489663 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c3512ca-ae09-4231-920d-7b0694041097-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:03 crc kubenswrapper[4667]: I0131 04:07:03.489680 4667 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.074195 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.074203 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c3512ca-ae09-4231-920d-7b0694041097","Type":"ContainerDied","Data":"4ec50165be40949f73f11ad3b6b144b53a953cbad0d48cc1bc12719cfe53c876"} Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.074345 4667 scope.go:117] "RemoveContainer" containerID="0862f9f49ed7c9c42ddbfe2b9705819092f285174ad06a4b3a2acf308bb6b815" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.076639 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-s2th9" event={"ID":"6f4b88f0-e927-43e5-941e-1c431fb7269c","Type":"ContainerDied","Data":"851144a7c390d03824d9374acb517e9649a5020b4c625ab784a67d3a76e46df1"} Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.076697 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="851144a7c390d03824d9374acb517e9649a5020b4c625ab784a67d3a76e46df1" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.076696 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-s2th9" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.086762 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" podUID="ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.089256 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5ed264cf-9f83-47d5-8291-95ee4838616f","Type":"ContainerDied","Data":"2ba19547fbd37d702bb18b0b8276e0b6780283f1f88d24bcd2db4a438285ff35"} Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.089527 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.135744 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.162583 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.175916 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 04:07:04 crc kubenswrapper[4667]: E0131 04:07:04.176361 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3512ca-ae09-4231-920d-7b0694041097" containerName="glance-httpd" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.176394 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3512ca-ae09-4231-920d-7b0694041097" containerName="glance-httpd" Jan 31 04:07:04 crc kubenswrapper[4667]: E0131 04:07:04.176404 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f4b88f0-e927-43e5-941e-1c431fb7269c" containerName="keystone-bootstrap" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.176411 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f4b88f0-e927-43e5-941e-1c431fb7269c" containerName="keystone-bootstrap" Jan 31 04:07:04 crc kubenswrapper[4667]: E0131 04:07:04.176424 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed264cf-9f83-47d5-8291-95ee4838616f" containerName="glance-log" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.176431 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed264cf-9f83-47d5-8291-95ee4838616f" containerName="glance-log" Jan 31 04:07:04 crc kubenswrapper[4667]: E0131 04:07:04.176457 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3512ca-ae09-4231-920d-7b0694041097" containerName="glance-log" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.176466 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3512ca-ae09-4231-920d-7b0694041097" containerName="glance-log" Jan 31 04:07:04 crc kubenswrapper[4667]: E0131 04:07:04.176476 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed264cf-9f83-47d5-8291-95ee4838616f" containerName="glance-httpd" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.176482 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed264cf-9f83-47d5-8291-95ee4838616f" containerName="glance-httpd" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.176661 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed264cf-9f83-47d5-8291-95ee4838616f" containerName="glance-httpd" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.176670 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f4b88f0-e927-43e5-941e-1c431fb7269c" containerName="keystone-bootstrap" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.176686 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed264cf-9f83-47d5-8291-95ee4838616f" containerName="glance-log" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.176697 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c3512ca-ae09-4231-920d-7b0694041097" containerName="glance-log" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.176710 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c3512ca-ae09-4231-920d-7b0694041097" containerName="glance-httpd" Jan 31 04:07:04 crc 
kubenswrapper[4667]: I0131 04:07:04.177876 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.185568 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8pgdz" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.185853 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.185969 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.185997 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.192366 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.216605 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.216656 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75c7336f-29b1-4a8a-88c1-69eec14a92b7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.216684 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.216709 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75c7336f-29b1-4a8a-88c1-69eec14a92b7-logs\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.216751 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x79h\" (UniqueName: \"kubernetes.io/projected/75c7336f-29b1-4a8a-88c1-69eec14a92b7-kube-api-access-9x79h\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.216769 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-scripts\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.216810 4667 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.216856 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-config-data\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.217242 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.276984 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.308657 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.311203 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.316212 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.318451 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75c7336f-29b1-4a8a-88c1-69eec14a92b7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.318522 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.318559 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75c7336f-29b1-4a8a-88c1-69eec14a92b7-logs\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.318620 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x79h\" (UniqueName: \"kubernetes.io/projected/75c7336f-29b1-4a8a-88c1-69eec14a92b7-kube-api-access-9x79h\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.318641 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-scripts\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.318706 4667 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.318762 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-config-data\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.318833 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.321300 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.322009 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75c7336f-29b1-4a8a-88c1-69eec14a92b7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.322142 4667 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.325482 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75c7336f-29b1-4a8a-88c1-69eec14a92b7-logs\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.338813 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.351996 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x79h\" (UniqueName: \"kubernetes.io/projected/75c7336f-29b1-4a8a-88c1-69eec14a92b7-kube-api-access-9x79h\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.352033 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.356397 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.357542 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.374529 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-scripts\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.378496 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-config-data\") pod \"glance-default-external-api-0\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.421689 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.421818 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.427630 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.427714 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.427787 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4128ea2d-f529-4224-a008-560c8920dc8f-logs\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.427867 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.428067 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x6jm\" (UniqueName: \"kubernetes.io/projected/4128ea2d-f529-4224-a008-560c8920dc8f-kube-api-access-5x6jm\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.428279 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4128ea2d-f529-4224-a008-560c8920dc8f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.471119 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-s2th9"] Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.484310 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-s2th9"] Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.504710 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.531350 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.531448 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x6jm\" (UniqueName: \"kubernetes.io/projected/4128ea2d-f529-4224-a008-560c8920dc8f-kube-api-access-5x6jm\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.531527 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4128ea2d-f529-4224-a008-560c8920dc8f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.531598 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.531638 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.531693 4667 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.531721 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.531757 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4128ea2d-f529-4224-a008-560c8920dc8f-logs\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.532672 4667 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.533316 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4128ea2d-f529-4224-a008-560c8920dc8f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.533950 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4128ea2d-f529-4224-a008-560c8920dc8f-logs\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.540535 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.543276 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.544414 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.545766 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.573964 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x6jm\" (UniqueName: \"kubernetes.io/projected/4128ea2d-f529-4224-a008-560c8920dc8f-kube-api-access-5x6jm\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.585206 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-w9cj2"] Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.586584 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.591979 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rw7d7" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.591987 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.592350 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.592481 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.592621 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.609801 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-w9cj2"] Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.610102 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.738571 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.741595 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-scripts\") pod \"keystone-bootstrap-w9cj2\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.741668 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-fernet-keys\") pod \"keystone-bootstrap-w9cj2\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.741714 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-combined-ca-bundle\") pod \"keystone-bootstrap-w9cj2\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.741764 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-config-data\") pod \"keystone-bootstrap-w9cj2\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.741788 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c2fc\" (UniqueName: \"kubernetes.io/projected/8d7dc1b5-7662-4687-964b-b3e21fce9e06-kube-api-access-4c2fc\") pod \"keystone-bootstrap-w9cj2\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.741815 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-credential-keys\") pod \"keystone-bootstrap-w9cj2\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.843545 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-scripts\") pod \"keystone-bootstrap-w9cj2\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.843633 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-fernet-keys\") pod \"keystone-bootstrap-w9cj2\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.843678 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-combined-ca-bundle\") pod \"keystone-bootstrap-w9cj2\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " 
pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.843720 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-config-data\") pod \"keystone-bootstrap-w9cj2\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.843750 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c2fc\" (UniqueName: \"kubernetes.io/projected/8d7dc1b5-7662-4687-964b-b3e21fce9e06-kube-api-access-4c2fc\") pod \"keystone-bootstrap-w9cj2\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.843778 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-credential-keys\") pod \"keystone-bootstrap-w9cj2\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.848940 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-credential-keys\") pod \"keystone-bootstrap-w9cj2\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.849056 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-scripts\") pod \"keystone-bootstrap-w9cj2\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.849329 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-fernet-keys\") pod \"keystone-bootstrap-w9cj2\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.849677 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-combined-ca-bundle\") pod \"keystone-bootstrap-w9cj2\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.853080 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-config-data\") pod \"keystone-bootstrap-w9cj2\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.864172 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c2fc\" (UniqueName: \"kubernetes.io/projected/8d7dc1b5-7662-4687-964b-b3e21fce9e06-kube-api-access-4c2fc\") pod \"keystone-bootstrap-w9cj2\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:04 crc kubenswrapper[4667]: I0131 04:07:04.953384 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.244157 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:07:05 crc kubenswrapper[4667]: E0131 04:07:05.247644 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 31 04:07:05 crc kubenswrapper[4667]: E0131 04:07:05.247943 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c8h7fh8fh54hch56bh58fh64h589h65ch678h688h5c7h67h5bch6h579hbfh56dh674h65dh656h676hd9h5h665h64bh676h59dh99h675h4q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjsl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6b97688f77-kkzs5_openstack(15d05b5e-da40-49f4-8556-f6a192a9f776): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:07:05 crc kubenswrapper[4667]: E0131 04:07:05.250688 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6b97688f77-kkzs5" podUID="15d05b5e-da40-49f4-8556-f6a192a9f776" Jan 31 04:07:05 crc kubenswrapper[4667]: E0131 04:07:05.251133 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 31 
04:07:05 crc kubenswrapper[4667]: E0131 04:07:05.251341 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56ch5b6h87h58fh655h675h97h687h5ffh5dfh74h67ch76h685h55bh57fh5bdhf5h7ch688hd8h54h95h85h9hbdh5b9hf9h574h5fbh5cdh96q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bht9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(c175848a-4645-42e7-8ccc-ab873e1ff7aa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.339325 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c3512ca-ae09-4231-920d-7b0694041097" path="/var/lib/kubelet/pods/1c3512ca-ae09-4231-920d-7b0694041097/volumes" Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.340044 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ed264cf-9f83-47d5-8291-95ee4838616f" path="/var/lib/kubelet/pods/5ed264cf-9f83-47d5-8291-95ee4838616f/volumes" Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.340580 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f4b88f0-e927-43e5-941e-1c431fb7269c" path="/var/lib/kubelet/pods/6f4b88f0-e927-43e5-941e-1c431fb7269c/volumes" Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.351807 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-dns-svc\") pod \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.351926 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-ovsdbserver-nb\") pod \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.351957 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-dns-swift-storage-0\") pod \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.352013 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-ovsdbserver-sb\") pod \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.352077 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmgtx\" (UniqueName: \"kubernetes.io/projected/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-kube-api-access-qmgtx\") pod \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.352169 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-config\") pod \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\" (UID: \"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1\") " Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.366830 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-kube-api-access-qmgtx" (OuterVolumeSpecName: "kube-api-access-qmgtx") pod "ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1" (UID: "ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1"). InnerVolumeSpecName "kube-api-access-qmgtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.419551 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1" (UID: "ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.428089 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1" (UID: "ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.431765 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-config" (OuterVolumeSpecName: "config") pod "ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1" (UID: "ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.434385 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1" (UID: "ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.441571 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1" (UID: "ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.458975 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.459005 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmgtx\" (UniqueName: \"kubernetes.io/projected/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-kube-api-access-qmgtx\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.459016 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.459025 4667 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.459034 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:05 crc kubenswrapper[4667]: I0131 04:07:05.459042 4667 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:06 crc kubenswrapper[4667]: I0131 04:07:06.126984 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" Jan 31 04:07:06 crc kubenswrapper[4667]: I0131 04:07:06.127144 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" event={"ID":"ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1","Type":"ContainerDied","Data":"a78ecca05c334fbd01b9b014d0bff08872e28937787b58779d937f727ac0a660"} Jan 31 04:07:06 crc kubenswrapper[4667]: I0131 04:07:06.197254 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9n7xp"] Jan 31 04:07:06 crc kubenswrapper[4667]: I0131 04:07:06.232880 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9n7xp"] Jan 31 04:07:07 crc kubenswrapper[4667]: I0131 04:07:07.292256 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1" path="/var/lib/kubelet/pods/ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1/volumes" Jan 31 04:07:09 crc kubenswrapper[4667]: I0131 04:07:09.087126 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-9n7xp" podUID="ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Jan 31 04:07:10 crc kubenswrapper[4667]: I0131 04:07:10.204314 4667 generic.go:334] "Generic (PLEG): container finished" podID="b58d4b49-fb58-480e-9a43-2675ce1fc0c1" containerID="49e6db4ed43ae57cf2c262faf58a2c2951eec05b799f08020e9a3f1a594aeac7" exitCode=0 Jan 31 04:07:10 crc kubenswrapper[4667]: I0131 04:07:10.204523 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vvkc5" event={"ID":"b58d4b49-fb58-480e-9a43-2675ce1fc0c1","Type":"ContainerDied","Data":"49e6db4ed43ae57cf2c262faf58a2c2951eec05b799f08020e9a3f1a594aeac7"} Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.494858 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85bb8b576c-j4h8q" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.503536 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-559945cfdf-qwr2k" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.539366 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkxc2\" (UniqueName: \"kubernetes.io/projected/ec50b0b7-497c-4bd1-a031-007dbd616e3c-kube-api-access-vkxc2\") pod \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\" (UID: \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\") " Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.540018 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec50b0b7-497c-4bd1-a031-007dbd616e3c-config-data\") pod \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\" (UID: \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\") " Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.541489 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec50b0b7-497c-4bd1-a031-007dbd616e3c-config-data" (OuterVolumeSpecName: "config-data") pod "ec50b0b7-497c-4bd1-a031-007dbd616e3c" (UID: "ec50b0b7-497c-4bd1-a031-007dbd616e3c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.541562 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec50b0b7-497c-4bd1-a031-007dbd616e3c-scripts\") pod \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\" (UID: \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\") " Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.541737 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec50b0b7-497c-4bd1-a031-007dbd616e3c-logs\") pod \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\" (UID: \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\") " Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.541822 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ec50b0b7-497c-4bd1-a031-007dbd616e3c-horizon-secret-key\") pod \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\" (UID: \"ec50b0b7-497c-4bd1-a031-007dbd616e3c\") " Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.542303 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec50b0b7-497c-4bd1-a031-007dbd616e3c-scripts" (OuterVolumeSpecName: "scripts") pod "ec50b0b7-497c-4bd1-a031-007dbd616e3c" (UID: "ec50b0b7-497c-4bd1-a031-007dbd616e3c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.542745 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec50b0b7-497c-4bd1-a031-007dbd616e3c-logs" (OuterVolumeSpecName: "logs") pod "ec50b0b7-497c-4bd1-a031-007dbd616e3c" (UID: "ec50b0b7-497c-4bd1-a031-007dbd616e3c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.543254 4667 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec50b0b7-497c-4bd1-a031-007dbd616e3c-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.543282 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec50b0b7-497c-4bd1-a031-007dbd616e3c-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.543296 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec50b0b7-497c-4bd1-a031-007dbd616e3c-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.555533 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec50b0b7-497c-4bd1-a031-007dbd616e3c-kube-api-access-vkxc2" (OuterVolumeSpecName: "kube-api-access-vkxc2") pod "ec50b0b7-497c-4bd1-a031-007dbd616e3c" (UID: "ec50b0b7-497c-4bd1-a031-007dbd616e3c"). InnerVolumeSpecName "kube-api-access-vkxc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.565410 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec50b0b7-497c-4bd1-a031-007dbd616e3c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ec50b0b7-497c-4bd1-a031-007dbd616e3c" (UID: "ec50b0b7-497c-4bd1-a031-007dbd616e3c"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.644770 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1636f31-13c4-4745-88a2-20ff71e46358-logs\") pod \"f1636f31-13c4-4745-88a2-20ff71e46358\" (UID: \"f1636f31-13c4-4745-88a2-20ff71e46358\") " Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.645014 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1636f31-13c4-4745-88a2-20ff71e46358-scripts\") pod \"f1636f31-13c4-4745-88a2-20ff71e46358\" (UID: \"f1636f31-13c4-4745-88a2-20ff71e46358\") " Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.645150 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wm8j\" (UniqueName: \"kubernetes.io/projected/f1636f31-13c4-4745-88a2-20ff71e46358-kube-api-access-8wm8j\") pod \"f1636f31-13c4-4745-88a2-20ff71e46358\" (UID: \"f1636f31-13c4-4745-88a2-20ff71e46358\") " Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.645263 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1636f31-13c4-4745-88a2-20ff71e46358-config-data\") pod \"f1636f31-13c4-4745-88a2-20ff71e46358\" (UID: \"f1636f31-13c4-4745-88a2-20ff71e46358\") " Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.645259 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1636f31-13c4-4745-88a2-20ff71e46358-logs" (OuterVolumeSpecName: "logs") pod "f1636f31-13c4-4745-88a2-20ff71e46358" (UID: "f1636f31-13c4-4745-88a2-20ff71e46358"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.645307 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1636f31-13c4-4745-88a2-20ff71e46358-horizon-secret-key\") pod \"f1636f31-13c4-4745-88a2-20ff71e46358\" (UID: \"f1636f31-13c4-4745-88a2-20ff71e46358\") " Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.645922 4667 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1636f31-13c4-4745-88a2-20ff71e46358-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.645945 4667 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ec50b0b7-497c-4bd1-a031-007dbd616e3c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.645961 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkxc2\" (UniqueName: \"kubernetes.io/projected/ec50b0b7-497c-4bd1-a031-007dbd616e3c-kube-api-access-vkxc2\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.646129 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1636f31-13c4-4745-88a2-20ff71e46358-scripts" (OuterVolumeSpecName: "scripts") pod "f1636f31-13c4-4745-88a2-20ff71e46358" (UID: "f1636f31-13c4-4745-88a2-20ff71e46358"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.646973 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1636f31-13c4-4745-88a2-20ff71e46358-config-data" (OuterVolumeSpecName: "config-data") pod "f1636f31-13c4-4745-88a2-20ff71e46358" (UID: "f1636f31-13c4-4745-88a2-20ff71e46358"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.673600 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1636f31-13c4-4745-88a2-20ff71e46358-kube-api-access-8wm8j" (OuterVolumeSpecName: "kube-api-access-8wm8j") pod "f1636f31-13c4-4745-88a2-20ff71e46358" (UID: "f1636f31-13c4-4745-88a2-20ff71e46358"). InnerVolumeSpecName "kube-api-access-8wm8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.675915 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1636f31-13c4-4745-88a2-20ff71e46358-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f1636f31-13c4-4745-88a2-20ff71e46358" (UID: "f1636f31-13c4-4745-88a2-20ff71e46358"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.747907 4667 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1636f31-13c4-4745-88a2-20ff71e46358-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.747970 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1636f31-13c4-4745-88a2-20ff71e46358-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.747990 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wm8j\" (UniqueName: \"kubernetes.io/projected/f1636f31-13c4-4745-88a2-20ff71e46358-kube-api-access-8wm8j\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:17 crc kubenswrapper[4667]: I0131 04:07:17.748005 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1636f31-13c4-4745-88a2-20ff71e46358-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.330176 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85bb8b576c-j4h8q" event={"ID":"ec50b0b7-497c-4bd1-a031-007dbd616e3c","Type":"ContainerDied","Data":"e8415ebc17ff236763ea92519f48854b0cd6001803361ad6df09f684c16993d4"} Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.330348 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85bb8b576c-j4h8q" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.337243 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-559945cfdf-qwr2k" event={"ID":"f1636f31-13c4-4745-88a2-20ff71e46358","Type":"ContainerDied","Data":"1f23cd9deaf2930e302cddb559d39075628ec54f8220249420db03a7015b7932"} Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.340451 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-559945cfdf-qwr2k" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.473016 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-85bb8b576c-j4h8q"] Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.485223 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-85bb8b576c-j4h8q"] Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.503655 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-559945cfdf-qwr2k"] Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.511998 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-559945cfdf-qwr2k"] Jan 31 04:07:18 crc kubenswrapper[4667]: E0131 04:07:18.655521 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 31 04:07:18 crc kubenswrapper[4667]: E0131 04:07:18.655814 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6t7c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-4nj2p_openstack(7b6bac61-1103-438b-9e75-f3d6b6902270): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 31 04:07:18 crc kubenswrapper[4667]: E0131 04:07:18.657055 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-4nj2p" podUID="7b6bac61-1103-438b-9e75-f3d6b6902270" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.727256 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b97688f77-kkzs5" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.736904 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vvkc5" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.793513 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/15d05b5e-da40-49f4-8556-f6a192a9f776-horizon-secret-key\") pod \"15d05b5e-da40-49f4-8556-f6a192a9f776\" (UID: \"15d05b5e-da40-49f4-8556-f6a192a9f776\") " Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.794131 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjsl6\" (UniqueName: \"kubernetes.io/projected/15d05b5e-da40-49f4-8556-f6a192a9f776-kube-api-access-tjsl6\") pod \"15d05b5e-da40-49f4-8556-f6a192a9f776\" (UID: \"15d05b5e-da40-49f4-8556-f6a192a9f776\") " Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.794290 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15d05b5e-da40-49f4-8556-f6a192a9f776-logs\") pod \"15d05b5e-da40-49f4-8556-f6a192a9f776\" (UID: \"15d05b5e-da40-49f4-8556-f6a192a9f776\") " Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.794436 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15d05b5e-da40-49f4-8556-f6a192a9f776-scripts\") pod \"15d05b5e-da40-49f4-8556-f6a192a9f776\" (UID: \"15d05b5e-da40-49f4-8556-f6a192a9f776\") " Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.794542 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/15d05b5e-da40-49f4-8556-f6a192a9f776-config-data\") pod \"15d05b5e-da40-49f4-8556-f6a192a9f776\" (UID: \"15d05b5e-da40-49f4-8556-f6a192a9f776\") " Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.794614 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15d05b5e-da40-49f4-8556-f6a192a9f776-logs" (OuterVolumeSpecName: "logs") pod "15d05b5e-da40-49f4-8556-f6a192a9f776" (UID: "15d05b5e-da40-49f4-8556-f6a192a9f776"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.795150 4667 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15d05b5e-da40-49f4-8556-f6a192a9f776-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.795409 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15d05b5e-da40-49f4-8556-f6a192a9f776-scripts" (OuterVolumeSpecName: "scripts") pod "15d05b5e-da40-49f4-8556-f6a192a9f776" (UID: "15d05b5e-da40-49f4-8556-f6a192a9f776"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.795598 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15d05b5e-da40-49f4-8556-f6a192a9f776-config-data" (OuterVolumeSpecName: "config-data") pod "15d05b5e-da40-49f4-8556-f6a192a9f776" (UID: "15d05b5e-da40-49f4-8556-f6a192a9f776"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.801921 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15d05b5e-da40-49f4-8556-f6a192a9f776-kube-api-access-tjsl6" (OuterVolumeSpecName: "kube-api-access-tjsl6") pod "15d05b5e-da40-49f4-8556-f6a192a9f776" (UID: "15d05b5e-da40-49f4-8556-f6a192a9f776"). InnerVolumeSpecName "kube-api-access-tjsl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.802041 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d05b5e-da40-49f4-8556-f6a192a9f776-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "15d05b5e-da40-49f4-8556-f6a192a9f776" (UID: "15d05b5e-da40-49f4-8556-f6a192a9f776"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.896578 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b58d4b49-fb58-480e-9a43-2675ce1fc0c1-config\") pod \"b58d4b49-fb58-480e-9a43-2675ce1fc0c1\" (UID: \"b58d4b49-fb58-480e-9a43-2675ce1fc0c1\") " Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.896719 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b58d4b49-fb58-480e-9a43-2675ce1fc0c1-combined-ca-bundle\") pod \"b58d4b49-fb58-480e-9a43-2675ce1fc0c1\" (UID: \"b58d4b49-fb58-480e-9a43-2675ce1fc0c1\") " Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.896869 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2btf5\" (UniqueName: \"kubernetes.io/projected/b58d4b49-fb58-480e-9a43-2675ce1fc0c1-kube-api-access-2btf5\") pod \"b58d4b49-fb58-480e-9a43-2675ce1fc0c1\" (UID: \"b58d4b49-fb58-480e-9a43-2675ce1fc0c1\") " Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.897292 4667 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/15d05b5e-da40-49f4-8556-f6a192a9f776-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.897314 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjsl6\" (UniqueName: \"kubernetes.io/projected/15d05b5e-da40-49f4-8556-f6a192a9f776-kube-api-access-tjsl6\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.897331 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15d05b5e-da40-49f4-8556-f6a192a9f776-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.897348 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/15d05b5e-da40-49f4-8556-f6a192a9f776-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.900902 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b58d4b49-fb58-480e-9a43-2675ce1fc0c1-kube-api-access-2btf5" (OuterVolumeSpecName: "kube-api-access-2btf5") pod "b58d4b49-fb58-480e-9a43-2675ce1fc0c1" (UID: "b58d4b49-fb58-480e-9a43-2675ce1fc0c1"). InnerVolumeSpecName "kube-api-access-2btf5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.921591 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b58d4b49-fb58-480e-9a43-2675ce1fc0c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b58d4b49-fb58-480e-9a43-2675ce1fc0c1" (UID: "b58d4b49-fb58-480e-9a43-2675ce1fc0c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.922153 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b58d4b49-fb58-480e-9a43-2675ce1fc0c1-config" (OuterVolumeSpecName: "config") pod "b58d4b49-fb58-480e-9a43-2675ce1fc0c1" (UID: "b58d4b49-fb58-480e-9a43-2675ce1fc0c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.998712 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b58d4b49-fb58-480e-9a43-2675ce1fc0c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.998754 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2btf5\" (UniqueName: \"kubernetes.io/projected/b58d4b49-fb58-480e-9a43-2675ce1fc0c1-kube-api-access-2btf5\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:18 crc kubenswrapper[4667]: I0131 04:07:18.998768 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b58d4b49-fb58-480e-9a43-2675ce1fc0c1-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:19 crc kubenswrapper[4667]: I0131 04:07:19.294043 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec50b0b7-497c-4bd1-a031-007dbd616e3c" path="/var/lib/kubelet/pods/ec50b0b7-497c-4bd1-a031-007dbd616e3c/volumes" Jan 31 04:07:19 crc kubenswrapper[4667]: I0131 04:07:19.294503 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1636f31-13c4-4745-88a2-20ff71e46358" path="/var/lib/kubelet/pods/f1636f31-13c4-4745-88a2-20ff71e46358/volumes" Jan 31 04:07:19 crc kubenswrapper[4667]: I0131 04:07:19.350207 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vvkc5" event={"ID":"b58d4b49-fb58-480e-9a43-2675ce1fc0c1","Type":"ContainerDied","Data":"00bd4d030dac7cc4f0362e2625d7c88f23ab4437b49e06790fc9a23801857411"} Jan 31 04:07:19 crc kubenswrapper[4667]: I0131 04:07:19.350245 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vvkc5" Jan 31 04:07:19 crc kubenswrapper[4667]: I0131 04:07:19.350265 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00bd4d030dac7cc4f0362e2625d7c88f23ab4437b49e06790fc9a23801857411" Jan 31 04:07:19 crc kubenswrapper[4667]: I0131 04:07:19.352707 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b97688f77-kkzs5" event={"ID":"15d05b5e-da40-49f4-8556-f6a192a9f776","Type":"ContainerDied","Data":"c5a634f4cbe1364fe2f96bfdb122bc8687f895e984f30e46f3fa058ddf8646c0"} Jan 31 04:07:19 crc kubenswrapper[4667]: I0131 04:07:19.352727 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b97688f77-kkzs5" Jan 31 04:07:19 crc kubenswrapper[4667]: E0131 04:07:19.355284 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-4nj2p" podUID="7b6bac61-1103-438b-9e75-f3d6b6902270" Jan 31 04:07:19 crc kubenswrapper[4667]: I0131 04:07:19.433996 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b97688f77-kkzs5"] Jan 31 04:07:19 crc kubenswrapper[4667]: I0131 04:07:19.448994 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6b97688f77-kkzs5"] Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.029252 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-c58f7"] Jan 31 04:07:20 crc kubenswrapper[4667]: E0131 04:07:20.030182 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1" containerName="init" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.030203 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1" containerName="init" Jan 31 04:07:20 crc kubenswrapper[4667]: E0131 04:07:20.030225 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1" containerName="dnsmasq-dns" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.030233 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1" containerName="dnsmasq-dns" Jan 31 04:07:20 crc kubenswrapper[4667]: E0131 04:07:20.030242 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b58d4b49-fb58-480e-9a43-2675ce1fc0c1" containerName="neutron-db-sync" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.030251 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="b58d4b49-fb58-480e-9a43-2675ce1fc0c1" containerName="neutron-db-sync" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.030467 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccdb041b-25b4-4871-9b8b-7ac2b57e3ec1" containerName="dnsmasq-dns" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.030496 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="b58d4b49-fb58-480e-9a43-2675ce1fc0c1" containerName="neutron-db-sync" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.031420 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.047244 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-c58f7"] Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.132866 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-config\") pod \"dnsmasq-dns-55f844cf75-c58f7\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.132955 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-dns-svc\") pod \"dnsmasq-dns-55f844cf75-c58f7\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.133012 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-c58f7\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.133079 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-c58f7\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.133097 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srxvk\" (UniqueName: \"kubernetes.io/projected/23ada731-7288-4699-9ae2-d1bde47a02a2-kube-api-access-srxvk\") pod \"dnsmasq-dns-55f844cf75-c58f7\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.133116 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-c58f7\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.168295 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-746c944c96-t4g84"] Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.170346 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-746c944c96-t4g84" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.179182 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-hlkh6" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.179408 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.185234 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.185449 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.213936 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-746c944c96-t4g84"] Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.235580 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-c58f7\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.235639 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srxvk\" (UniqueName: \"kubernetes.io/projected/23ada731-7288-4699-9ae2-d1bde47a02a2-kube-api-access-srxvk\") pod \"dnsmasq-dns-55f844cf75-c58f7\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.235667 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-c58f7\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.235725 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-config\") pod \"dnsmasq-dns-55f844cf75-c58f7\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.235772 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-dns-svc\") pod \"dnsmasq-dns-55f844cf75-c58f7\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.235821 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-c58f7\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.237009 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-c58f7\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " 
pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.237561 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-c58f7\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.238460 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-c58f7\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.239237 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-config\") pod \"dnsmasq-dns-55f844cf75-c58f7\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.239321 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-dns-svc\") pod \"dnsmasq-dns-55f844cf75-c58f7\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.270987 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srxvk\" (UniqueName: \"kubernetes.io/projected/23ada731-7288-4699-9ae2-d1bde47a02a2-kube-api-access-srxvk\") pod \"dnsmasq-dns-55f844cf75-c58f7\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.337202 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mmw4\" (UniqueName: \"kubernetes.io/projected/43842154-1666-491b-b37a-061c1a7c2b90-kube-api-access-7mmw4\") pod \"neutron-746c944c96-t4g84\" (UID: \"43842154-1666-491b-b37a-061c1a7c2b90\") " pod="openstack/neutron-746c944c96-t4g84" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.337408 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-httpd-config\") pod \"neutron-746c944c96-t4g84\" (UID: \"43842154-1666-491b-b37a-061c1a7c2b90\") " pod="openstack/neutron-746c944c96-t4g84" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.337514 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-ovndb-tls-certs\") pod \"neutron-746c944c96-t4g84\" (UID: \"43842154-1666-491b-b37a-061c1a7c2b90\") " pod="openstack/neutron-746c944c96-t4g84" Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.337806 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-combined-ca-bundle\") pod \"neutron-746c944c96-t4g84\" (UID: \"43842154-1666-491b-b37a-061c1a7c2b90\") " pod="openstack/neutron-746c944c96-t4g84" 
Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.338171 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-config\") pod \"neutron-746c944c96-t4g84\" (UID: \"43842154-1666-491b-b37a-061c1a7c2b90\") " pod="openstack/neutron-746c944c96-t4g84"
Jan 31 04:07:20 crc kubenswrapper[4667]: E0131 04:07:20.364058 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Jan 31 04:07:20 crc kubenswrapper[4667]: E0131 04:07:20.364373 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6gtrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-245c9_openstack(bc9db8ae-2f60-4efd-9a11-4aac5f336900): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.365035 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-c58f7"
Jan 31 04:07:20 crc kubenswrapper[4667]: E0131 04:07:20.366716 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-245c9" podUID="bc9db8ae-2f60-4efd-9a11-4aac5f336900"
Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.386458 4667 scope.go:117] "RemoveContainer" containerID="94750ae3e2a7775053bc506dba7c793f1f9b54228fe142b1885b84d78ccfe343"
Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.441690 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mmw4\" (UniqueName: \"kubernetes.io/projected/43842154-1666-491b-b37a-061c1a7c2b90-kube-api-access-7mmw4\") pod \"neutron-746c944c96-t4g84\" (UID: \"43842154-1666-491b-b37a-061c1a7c2b90\") " pod="openstack/neutron-746c944c96-t4g84"
Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.441784 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-httpd-config\") pod \"neutron-746c944c96-t4g84\" (UID: \"43842154-1666-491b-b37a-061c1a7c2b90\") " pod="openstack/neutron-746c944c96-t4g84"
Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.441809 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-ovndb-tls-certs\") pod \"neutron-746c944c96-t4g84\" (UID: \"43842154-1666-491b-b37a-061c1a7c2b90\") " pod="openstack/neutron-746c944c96-t4g84"
Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.441856 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-combined-ca-bundle\") pod \"neutron-746c944c96-t4g84\" (UID: \"43842154-1666-491b-b37a-061c1a7c2b90\") " pod="openstack/neutron-746c944c96-t4g84"
Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.441909 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-config\") pod \"neutron-746c944c96-t4g84\" (UID: \"43842154-1666-491b-b37a-061c1a7c2b90\") " pod="openstack/neutron-746c944c96-t4g84"
Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.451137 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-ovndb-tls-certs\") pod \"neutron-746c944c96-t4g84\" (UID: \"43842154-1666-491b-b37a-061c1a7c2b90\") " pod="openstack/neutron-746c944c96-t4g84"
Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.452066 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-combined-ca-bundle\") pod \"neutron-746c944c96-t4g84\" (UID: \"43842154-1666-491b-b37a-061c1a7c2b90\") " pod="openstack/neutron-746c944c96-t4g84"
Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.453428 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-httpd-config\") pod \"neutron-746c944c96-t4g84\" (UID: \"43842154-1666-491b-b37a-061c1a7c2b90\") " pod="openstack/neutron-746c944c96-t4g84"
Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.459826 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mmw4\" (UniqueName: \"kubernetes.io/projected/43842154-1666-491b-b37a-061c1a7c2b90-kube-api-access-7mmw4\") pod \"neutron-746c944c96-t4g84\" (UID: \"43842154-1666-491b-b37a-061c1a7c2b90\") " pod="openstack/neutron-746c944c96-t4g84"
Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.461043 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-config\") pod \"neutron-746c944c96-t4g84\" (UID: \"43842154-1666-491b-b37a-061c1a7c2b90\") " pod="openstack/neutron-746c944c96-t4g84"
Jan 31 04:07:20 crc kubenswrapper[4667]: I0131 04:07:20.489009 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-746c944c96-t4g84"
Jan 31 04:07:21 crc kubenswrapper[4667]: I0131 04:07:21.057354 4667 scope.go:117] "RemoveContainer" containerID="7b99f67ddc65ec7524cf9870677db218127e04e2e6679b99a2d7e70da0369518"
Jan 31 04:07:21 crc kubenswrapper[4667]: I0131 04:07:21.188193 4667 scope.go:117] "RemoveContainer" containerID="000d672a56472d3246474cbd083ffa6d5135b5dfd347ea32f166923b80992033"
Jan 31 04:07:21 crc kubenswrapper[4667]: I0131 04:07:21.278112 4667 scope.go:117] "RemoveContainer" containerID="6f17dcd2e4af8382cd7b790476bde52a1a9da0e2ad82b484738e0fc7f69ea4b3"
Jan 31 04:07:21 crc kubenswrapper[4667]: I0131 04:07:21.307436 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15d05b5e-da40-49f4-8556-f6a192a9f776" path="/var/lib/kubelet/pods/15d05b5e-da40-49f4-8556-f6a192a9f776/volumes"
Jan 31 04:07:21 crc kubenswrapper[4667]: I0131 04:07:21.391743 4667 scope.go:117] "RemoveContainer" containerID="8779413dc87b5848812a23d54aac7d5441b5b6978b0b62e0a87cf2ee4da33894"
Jan 31 04:07:21 crc kubenswrapper[4667]: E0131 04:07:21.434157 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-245c9" podUID="bc9db8ae-2f60-4efd-9a11-4aac5f336900"
Jan 31 04:07:21 crc kubenswrapper[4667]: I0131 04:07:21.482739 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-w9cj2"]
Jan 31 04:07:21 crc kubenswrapper[4667]: I0131 04:07:21.519441 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78789d8f44-5trmc"]
Jan 31 04:07:21 crc kubenswrapper[4667]: I0131 04:07:21.579881 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86c748c4d6-2grmh"]
Jan 31 04:07:21 crc kubenswrapper[4667]: W0131 04:07:21.602561 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7f8fd18_06a0_432e_8c17_c9b432b6ca69.slice/crio-8ace758f8f2a613c4be26a8aa6c951a1305e5bc3c5dfe51cf0ff083c4782b235 WatchSource:0}: Error finding container 8ace758f8f2a613c4be26a8aa6c951a1305e5bc3c5dfe51cf0ff083c4782b235: Status 404 returned error can't find the container with id 8ace758f8f2a613c4be26a8aa6c951a1305e5bc3c5dfe51cf0ff083c4782b235
Jan 31 04:07:22 crc kubenswrapper[4667]: I0131 04:07:22.003994 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-c58f7"]
Jan 31 04:07:22 crc kubenswrapper[4667]: I0131 04:07:22.186412 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 31 04:07:22 crc kubenswrapper[4667]: I0131 04:07:22.306082 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 31 04:07:22 crc kubenswrapper[4667]: W0131 04:07:22.324764 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75c7336f_29b1_4a8a_88c1_69eec14a92b7.slice/crio-73ec87c4c7504f6ec9d3cf013a4540adbf843b70a0166fbfbab419c322c0ac92 WatchSource:0}: Error finding container 73ec87c4c7504f6ec9d3cf013a4540adbf843b70a0166fbfbab419c322c0ac92: Status 404 returned error can't find the container with id 73ec87c4c7504f6ec9d3cf013a4540adbf843b70a0166fbfbab419c322c0ac92
Jan 31 04:07:22 crc kubenswrapper[4667]: I0131 04:07:22.461312 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78789d8f44-5trmc" event={"ID":"b7f8fd18-06a0-432e-8c17-c9b432b6ca69","Type":"ContainerStarted","Data":"8ace758f8f2a613c4be26a8aa6c951a1305e5bc3c5dfe51cf0ff083c4782b235"}
Jan 31 04:07:22 crc kubenswrapper[4667]: I0131 04:07:22.467008 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-c58f7" event={"ID":"23ada731-7288-4699-9ae2-d1bde47a02a2","Type":"ContainerStarted","Data":"8bf14bbfb0697cd8ca5db878235e1722f6a7221626113ed0a8b3441f92987685"}
Jan 31 04:07:22 crc kubenswrapper[4667]: I0131 04:07:22.477616 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mkdm4" event={"ID":"6e23be1c-6ab2-442e-b12e-e4083c274a67","Type":"ContainerStarted","Data":"a2ba612c47c6a1009fc72ff61e30d9cf1ee4813472358fc9f7e831e6c727188b"}
Jan 31 04:07:22 crc kubenswrapper[4667]: I0131 04:07:22.492067 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4128ea2d-f529-4224-a008-560c8920dc8f","Type":"ContainerStarted","Data":"1fdcfe3dafd53602d2a7af5ece97f1454b7a5bd73f01624bd6aeb6bb8df9b5ec"}
Jan 31 04:07:22 crc kubenswrapper[4667]: I0131 04:07:22.502695 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w9cj2" event={"ID":"8d7dc1b5-7662-4687-964b-b3e21fce9e06","Type":"ContainerStarted","Data":"91f50a9c2580148fd54f2e67117a35d9e045cbb9f43ff636b97ffca306a85772"}
Jan 31 04:07:22 crc kubenswrapper[4667]: I0131 04:07:22.516017 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-mkdm4" podStartSLOduration=4.682603798 podStartE2EDuration="43.515983109s" podCreationTimestamp="2026-01-31 04:06:39 +0000 UTC" firstStartedPulling="2026-01-31 04:06:41.496401105 +0000 UTC m=+1125.012736394" lastFinishedPulling="2026-01-31 04:07:20.329780406 +0000 UTC m=+1163.846115705" observedRunningTime="2026-01-31 04:07:22.506406215 +0000 UTC m=+1166.022741514" watchObservedRunningTime="2026-01-31 04:07:22.515983109 +0000 UTC m=+1166.032318408"
Jan 31 04:07:22 crc kubenswrapper[4667]: I0131 04:07:22.525982 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86c748c4d6-2grmh" event={"ID":"c6974567-3bea-447a-bb8b-ced22b6d34ce","Type":"ContainerStarted","Data":"d3d54de90ae761f2b9b5b81d6dbe6feb5d9bcc786aa33a713358f243cf1d9453"}
Jan 31 04:07:22 crc kubenswrapper[4667]: I0131 04:07:22.532290 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c175848a-4645-42e7-8ccc-ab873e1ff7aa","Type":"ContainerStarted","Data":"5f3054a5c6f2254b318f9d5a214799bad34be30fa6a4ea2f5072c54c61f95f3b"}
Jan 31 04:07:22 crc kubenswrapper[4667]: I0131 04:07:22.536422 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75c7336f-29b1-4a8a-88c1-69eec14a92b7","Type":"ContainerStarted","Data":"73ec87c4c7504f6ec9d3cf013a4540adbf843b70a0166fbfbab419c322c0ac92"}
Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.019036 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-746c944c96-t4g84"]
Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.052036 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7dc9f74cdf-w757n"]
Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.072192 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7dc9f74cdf-w757n"
Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.078597 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7dc9f74cdf-w757n"]
Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.079373 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.081765 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.114561 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-combined-ca-bundle\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n"
Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.114683 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-config\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n"
Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.114854 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-internal-tls-certs\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n"
Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.114936 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-public-tls-certs\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n"
Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.115024 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-ovndb-tls-certs\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n"
Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.115122 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfq4t\" (UniqueName: \"kubernetes.io/projected/0de14766-3b67-45ce-a8d8-276f90ce6310-kube-api-access-tfq4t\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfq4t\" (UniqueName: \"kubernetes.io/projected/0de14766-3b67-45ce-a8d8-276f90ce6310-kube-api-access-tfq4t\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.127347 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-httpd-config\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:07:23 crc kubenswrapper[4667]: W0131 04:07:23.222330 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43842154_1666_491b_b37a_061c1a7c2b90.slice/crio-d5792665b427db5423a942b5ae6e9824580cdc0be61cd232246d547cfa111570 WatchSource:0}: Error finding container d5792665b427db5423a942b5ae6e9824580cdc0be61cd232246d547cfa111570: Status 404 returned error can't find the container with id d5792665b427db5423a942b5ae6e9824580cdc0be61cd232246d547cfa111570 Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.235464 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-ovndb-tls-certs\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.235562 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfq4t\" (UniqueName: \"kubernetes.io/projected/0de14766-3b67-45ce-a8d8-276f90ce6310-kube-api-access-tfq4t\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.235667 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-httpd-config\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.235724 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-combined-ca-bundle\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.235764 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-config\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.235832 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-internal-tls-certs\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:07:23 
crc kubenswrapper[4667]: I0131 04:07:23.235871 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-public-tls-certs\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.243819 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-public-tls-certs\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.244689 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-config\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.245120 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-internal-tls-certs\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.245555 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-httpd-config\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.249415 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-combined-ca-bundle\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.262030 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfq4t\" (UniqueName: \"kubernetes.io/projected/0de14766-3b67-45ce-a8d8-276f90ce6310-kube-api-access-tfq4t\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.264128 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-ovndb-tls-certs\") pod \"neutron-7dc9f74cdf-w757n\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.476191 4667 util.go:30] "No sandbox for pod can be found. 
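The recurring manager.go:1169 warnings ("Failed to process watch event ... Status 404 ... can't find the container") fire when the resource monitor sees a new crio-... cgroup appear but cannot find the container by the time it inspects it. In this log they are transient races during pod startup: the same ID d5792665... that fails here at 04:07:23.222 is reported as ContainerStarted moments later. A sketch of the usual defensive handling, treating "not found" as a stale event to skip (hypothetical types, not the actual monitor code):

```go
// Sketch: tolerate watch events that race with container creation or
// deletion by treating "not found" as stale rather than fatal.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("status 404: can't find the container")

func inspect(id string, known map[string]bool) error {
	if !known[id] {
		return errNotFound
	}
	return nil
}

func main() {
	known := map[string]bool{"crio-8ace758f": true}
	for _, ev := range []string{"crio-8ace758f", "crio-d5792665"} {
		if err := inspect(ev, known); errors.Is(err, errNotFound) {
			fmt.Printf("skipping stale watch event %s: %v\n", ev, err)
			continue // the container is picked up again on a later pass
		}
		fmt.Println("tracking", ev)
	}
}
```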
Need to start a new one" pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.598948 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w9cj2" event={"ID":"8d7dc1b5-7662-4687-964b-b3e21fce9e06","Type":"ContainerStarted","Data":"880da3e2ef396b2fc27ef70d4c80b64e4d0e98ac62c00ed10919b5350be3803a"} Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.652471 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86c748c4d6-2grmh" event={"ID":"c6974567-3bea-447a-bb8b-ced22b6d34ce","Type":"ContainerStarted","Data":"3fa239e2b62f1e7aacddff89f2ed28a743b788c82b3a5252236ff48d58158880"} Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.652956 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86c748c4d6-2grmh" event={"ID":"c6974567-3bea-447a-bb8b-ced22b6d34ce","Type":"ContainerStarted","Data":"6151564215c6b2258fce388b639a16766c3fdc95565ccb66a72e32d9d544fef2"} Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.684827 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-w9cj2" podStartSLOduration=19.684800143 podStartE2EDuration="19.684800143s" podCreationTimestamp="2026-01-31 04:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:23.655001513 +0000 UTC m=+1167.171336812" watchObservedRunningTime="2026-01-31 04:07:23.684800143 +0000 UTC m=+1167.201135442" Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.712293 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78789d8f44-5trmc" event={"ID":"b7f8fd18-06a0-432e-8c17-c9b432b6ca69","Type":"ContainerStarted","Data":"ee19d508369900e40e0fadf4e91e4fb079d0e2cfa86f8523c3b3d7785c2c9dab"} Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.715489 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-86c748c4d6-2grmh" podStartSLOduration=32.120381256 podStartE2EDuration="32.715466196s" podCreationTimestamp="2026-01-31 04:06:51 +0000 UTC" firstStartedPulling="2026-01-31 04:07:21.603981801 +0000 UTC m=+1165.120317100" lastFinishedPulling="2026-01-31 04:07:22.199066741 +0000 UTC m=+1165.715402040" observedRunningTime="2026-01-31 04:07:23.705477891 +0000 UTC m=+1167.221813180" watchObservedRunningTime="2026-01-31 04:07:23.715466196 +0000 UTC m=+1167.231801495" Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.730314 4667 generic.go:334] "Generic (PLEG): container finished" podID="23ada731-7288-4699-9ae2-d1bde47a02a2" containerID="f85a64badc06a14e431e6a5acecea616ab84f8d1c01b2bf0c7d62704c4ccfa84" exitCode=0 Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.730426 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-c58f7" event={"ID":"23ada731-7288-4699-9ae2-d1bde47a02a2","Type":"ContainerDied","Data":"f85a64badc06a14e431e6a5acecea616ab84f8d1c01b2bf0c7d62704c4ccfa84"} Jan 31 04:07:23 crc kubenswrapper[4667]: I0131 04:07:23.766668 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-746c944c96-t4g84" event={"ID":"43842154-1666-491b-b37a-061c1a7c2b90","Type":"ContainerStarted","Data":"d5792665b427db5423a942b5ae6e9824580cdc0be61cd232246d547cfa111570"} Jan 31 04:07:24 crc kubenswrapper[4667]: I0131 04:07:24.321908 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7dc9f74cdf-w757n"] Jan 31 
04:07:24 crc kubenswrapper[4667]: W0131 04:07:24.387278 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0de14766_3b67_45ce_a8d8_276f90ce6310.slice/crio-8bef120c683b565caf8d531bf458f0b78f43d9937fcef7710f6a86b2ad05d2e7 WatchSource:0}: Error finding container 8bef120c683b565caf8d531bf458f0b78f43d9937fcef7710f6a86b2ad05d2e7: Status 404 returned error can't find the container with id 8bef120c683b565caf8d531bf458f0b78f43d9937fcef7710f6a86b2ad05d2e7 Jan 31 04:07:24 crc kubenswrapper[4667]: I0131 04:07:24.796075 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78789d8f44-5trmc" event={"ID":"b7f8fd18-06a0-432e-8c17-c9b432b6ca69","Type":"ContainerStarted","Data":"75959a94e1776a7025f344a57c090542bf63fb0615110c632e65e3a8c9188b18"} Jan 31 04:07:24 crc kubenswrapper[4667]: I0131 04:07:24.805486 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-c58f7" event={"ID":"23ada731-7288-4699-9ae2-d1bde47a02a2","Type":"ContainerStarted","Data":"04031cafab9c8ee2081c1d44fa5555b5c1ede2c62553aa59a7e3863f5e2cb39e"} Jan 31 04:07:24 crc kubenswrapper[4667]: I0131 04:07:24.806610 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:24 crc kubenswrapper[4667]: I0131 04:07:24.816110 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-746c944c96-t4g84" event={"ID":"43842154-1666-491b-b37a-061c1a7c2b90","Type":"ContainerStarted","Data":"e75110d56dd05c0efedb08ebd8d37f857b44ab009489c7d041c85b970d5349c4"} Jan 31 04:07:24 crc kubenswrapper[4667]: I0131 04:07:24.816171 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-746c944c96-t4g84" event={"ID":"43842154-1666-491b-b37a-061c1a7c2b90","Type":"ContainerStarted","Data":"15267c0b3933699e7d571eb96c51d917a3ecf248dccdd25d9641dad4133590cc"} Jan 31 04:07:24 crc kubenswrapper[4667]: I0131 04:07:24.816302 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-746c944c96-t4g84" Jan 31 04:07:24 crc kubenswrapper[4667]: I0131 04:07:24.834202 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc9f74cdf-w757n" event={"ID":"0de14766-3b67-45ce-a8d8-276f90ce6310","Type":"ContainerStarted","Data":"8bef120c683b565caf8d531bf458f0b78f43d9937fcef7710f6a86b2ad05d2e7"} Jan 31 04:07:24 crc kubenswrapper[4667]: I0131 04:07:24.853690 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4128ea2d-f529-4224-a008-560c8920dc8f","Type":"ContainerStarted","Data":"7e38a694cb59c8a8c538783300ba6051eef6525a53c82c8b59aa44c4a104d40b"} Jan 31 04:07:24 crc kubenswrapper[4667]: I0131 04:07:24.873620 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-c58f7" podStartSLOduration=4.873595754 podStartE2EDuration="4.873595754s" podCreationTimestamp="2026-01-31 04:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:24.872941317 +0000 UTC m=+1168.389276616" watchObservedRunningTime="2026-01-31 04:07:24.873595754 +0000 UTC m=+1168.389931043" Jan 31 04:07:24 crc kubenswrapper[4667]: I0131 04:07:24.876438 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-78789d8f44-5trmc" podStartSLOduration=33.311543889 
podStartE2EDuration="33.876430409s" podCreationTimestamp="2026-01-31 04:06:51 +0000 UTC" firstStartedPulling="2026-01-31 04:07:21.635013243 +0000 UTC m=+1165.151348542" lastFinishedPulling="2026-01-31 04:07:22.199899763 +0000 UTC m=+1165.716235062" observedRunningTime="2026-01-31 04:07:24.830446981 +0000 UTC m=+1168.346782270" watchObservedRunningTime="2026-01-31 04:07:24.876430409 +0000 UTC m=+1168.392765708" Jan 31 04:07:24 crc kubenswrapper[4667]: I0131 04:07:24.883807 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75c7336f-29b1-4a8a-88c1-69eec14a92b7","Type":"ContainerStarted","Data":"6580942cad36b126b755e3abd72d1cf44bec2102fa46a2dfdefa48d3b287cb12"} Jan 31 04:07:24 crc kubenswrapper[4667]: I0131 04:07:24.922350 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-746c944c96-t4g84" podStartSLOduration=4.922323426 podStartE2EDuration="4.922323426s" podCreationTimestamp="2026-01-31 04:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:24.905776087 +0000 UTC m=+1168.422111386" watchObservedRunningTime="2026-01-31 04:07:24.922323426 +0000 UTC m=+1168.438658725" Jan 31 04:07:25 crc kubenswrapper[4667]: I0131 04:07:25.897036 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc9f74cdf-w757n" event={"ID":"0de14766-3b67-45ce-a8d8-276f90ce6310","Type":"ContainerStarted","Data":"4744d93f757062e772e2bad13de89f14714f329c6557f80834045603808a0be2"} Jan 31 04:07:25 crc kubenswrapper[4667]: I0131 04:07:25.897558 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc9f74cdf-w757n" event={"ID":"0de14766-3b67-45ce-a8d8-276f90ce6310","Type":"ContainerStarted","Data":"d945eb277af255971ce21f9fbe29ebc3a76c0875c97b71da5a95149ba1c61844"} Jan 31 04:07:25 crc kubenswrapper[4667]: I0131 04:07:25.899323 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:07:25 crc kubenswrapper[4667]: I0131 04:07:25.909433 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4128ea2d-f529-4224-a008-560c8920dc8f","Type":"ContainerStarted","Data":"b7ba3b069b9a1e0ee79306c615d0a44bc6b2c2e42cc95b001530e969cd870866"} Jan 31 04:07:25 crc kubenswrapper[4667]: I0131 04:07:25.912159 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75c7336f-29b1-4a8a-88c1-69eec14a92b7","Type":"ContainerStarted","Data":"dcf86acf1b087b1327e20c15a2391601c9392b1f310f6ca59b658c3320521994"} Jan 31 04:07:25 crc kubenswrapper[4667]: I0131 04:07:25.923378 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-746c944c96-t4g84_43842154-1666-491b-b37a-061c1a7c2b90/neutron-httpd/0.log" Jan 31 04:07:25 crc kubenswrapper[4667]: I0131 04:07:25.924755 4667 generic.go:334] "Generic (PLEG): container finished" podID="43842154-1666-491b-b37a-061c1a7c2b90" containerID="e75110d56dd05c0efedb08ebd8d37f857b44ab009489c7d041c85b970d5349c4" exitCode=1 Jan 31 04:07:25 crc kubenswrapper[4667]: I0131 04:07:25.925859 4667 scope.go:117] "RemoveContainer" containerID="e75110d56dd05c0efedb08ebd8d37f857b44ab009489c7d041c85b970d5349c4" Jan 31 04:07:25 crc kubenswrapper[4667]: I0131 04:07:25.926194 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-746c944c96-t4g84" 
event={"ID":"43842154-1666-491b-b37a-061c1a7c2b90","Type":"ContainerDied","Data":"e75110d56dd05c0efedb08ebd8d37f857b44ab009489c7d041c85b970d5349c4"} Jan 31 04:07:25 crc kubenswrapper[4667]: I0131 04:07:25.935904 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7dc9f74cdf-w757n" podStartSLOduration=2.935880424 podStartE2EDuration="2.935880424s" podCreationTimestamp="2026-01-31 04:07:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:25.927053941 +0000 UTC m=+1169.443389240" watchObservedRunningTime="2026-01-31 04:07:25.935880424 +0000 UTC m=+1169.452215723" Jan 31 04:07:25 crc kubenswrapper[4667]: I0131 04:07:25.965007 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=21.962937781 podStartE2EDuration="21.962937781s" podCreationTimestamp="2026-01-31 04:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:25.95420899 +0000 UTC m=+1169.470544289" watchObservedRunningTime="2026-01-31 04:07:25.962937781 +0000 UTC m=+1169.479273080" Jan 31 04:07:26 crc kubenswrapper[4667]: I0131 04:07:26.033586 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=22.033557853 podStartE2EDuration="22.033557853s" podCreationTimestamp="2026-01-31 04:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:26.02173886 +0000 UTC m=+1169.538074159" watchObservedRunningTime="2026-01-31 04:07:26.033557853 +0000 UTC m=+1169.549893152" Jan 31 04:07:26 crc kubenswrapper[4667]: I0131 04:07:26.973978 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-746c944c96-t4g84_43842154-1666-491b-b37a-061c1a7c2b90/neutron-httpd/0.log" Jan 31 04:07:26 crc kubenswrapper[4667]: I0131 04:07:26.975524 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-746c944c96-t4g84" event={"ID":"43842154-1666-491b-b37a-061c1a7c2b90","Type":"ContainerStarted","Data":"87736fd513713fd560a94d7e15278e3343582af16953e4d734cf6cb3d54b555e"} Jan 31 04:07:28 crc kubenswrapper[4667]: I0131 04:07:28.013011 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-746c944c96-t4g84_43842154-1666-491b-b37a-061c1a7c2b90/neutron-httpd/1.log" Jan 31 04:07:28 crc kubenswrapper[4667]: I0131 04:07:28.015496 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-746c944c96-t4g84_43842154-1666-491b-b37a-061c1a7c2b90/neutron-httpd/0.log" Jan 31 04:07:28 crc kubenswrapper[4667]: I0131 04:07:28.015994 4667 generic.go:334] "Generic (PLEG): container finished" podID="43842154-1666-491b-b37a-061c1a7c2b90" containerID="87736fd513713fd560a94d7e15278e3343582af16953e4d734cf6cb3d54b555e" exitCode=1 Jan 31 04:07:28 crc kubenswrapper[4667]: I0131 04:07:28.016153 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-746c944c96-t4g84" event={"ID":"43842154-1666-491b-b37a-061c1a7c2b90","Type":"ContainerDied","Data":"87736fd513713fd560a94d7e15278e3343582af16953e4d734cf6cb3d54b555e"} Jan 31 04:07:28 crc kubenswrapper[4667]: I0131 04:07:28.016272 4667 scope.go:117] "RemoveContainer" 
containerID="e75110d56dd05c0efedb08ebd8d37f857b44ab009489c7d041c85b970d5349c4" Jan 31 04:07:28 crc kubenswrapper[4667]: I0131 04:07:28.017244 4667 scope.go:117] "RemoveContainer" containerID="87736fd513713fd560a94d7e15278e3343582af16953e4d734cf6cb3d54b555e" Jan 31 04:07:28 crc kubenswrapper[4667]: E0131 04:07:28.017592 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=neutron-httpd pod=neutron-746c944c96-t4g84_openstack(43842154-1666-491b-b37a-061c1a7c2b90)\"" pod="openstack/neutron-746c944c96-t4g84" podUID="43842154-1666-491b-b37a-061c1a7c2b90" Jan 31 04:07:29 crc kubenswrapper[4667]: I0131 04:07:29.028364 4667 generic.go:334] "Generic (PLEG): container finished" podID="6e23be1c-6ab2-442e-b12e-e4083c274a67" containerID="a2ba612c47c6a1009fc72ff61e30d9cf1ee4813472358fc9f7e831e6c727188b" exitCode=0 Jan 31 04:07:29 crc kubenswrapper[4667]: I0131 04:07:29.028452 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mkdm4" event={"ID":"6e23be1c-6ab2-442e-b12e-e4083c274a67","Type":"ContainerDied","Data":"a2ba612c47c6a1009fc72ff61e30d9cf1ee4813472358fc9f7e831e6c727188b"} Jan 31 04:07:29 crc kubenswrapper[4667]: I0131 04:07:29.029909 4667 scope.go:117] "RemoveContainer" containerID="87736fd513713fd560a94d7e15278e3343582af16953e4d734cf6cb3d54b555e" Jan 31 04:07:29 crc kubenswrapper[4667]: E0131 04:07:29.030284 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=neutron-httpd pod=neutron-746c944c96-t4g84_openstack(43842154-1666-491b-b37a-061c1a7c2b90)\"" pod="openstack/neutron-746c944c96-t4g84" podUID="43842154-1666-491b-b37a-061c1a7c2b90" Jan 31 04:07:30 crc kubenswrapper[4667]: I0131 04:07:30.369726 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:07:30 crc kubenswrapper[4667]: I0131 04:07:30.464506 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gzgp6"] Jan 31 04:07:30 crc kubenswrapper[4667]: I0131 04:07:30.465279 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" podUID="a0e678e9-3d71-4c27-a179-f4fdd1515701" containerName="dnsmasq-dns" containerID="cri-o://cbd292d4994e7bb4f762b3480154df9034043365069ac633f8390e73438bd433" gracePeriod=10 Jan 31 04:07:30 crc kubenswrapper[4667]: I0131 04:07:30.846374 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" podUID="a0e678e9-3d71-4c27-a179-f4fdd1515701" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: connect: connection refused" Jan 31 04:07:31 crc kubenswrapper[4667]: I0131 04:07:31.052602 4667 generic.go:334] "Generic (PLEG): container finished" podID="a0e678e9-3d71-4c27-a179-f4fdd1515701" containerID="cbd292d4994e7bb4f762b3480154df9034043365069ac633f8390e73438bd433" exitCode=0 Jan 31 04:07:31 crc kubenswrapper[4667]: I0131 04:07:31.052687 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" event={"ID":"a0e678e9-3d71-4c27-a179-f4fdd1515701","Type":"ContainerDied","Data":"cbd292d4994e7bb4f762b3480154df9034043365069ac633f8390e73438bd433"} Jan 31 04:07:31 crc kubenswrapper[4667]: I0131 04:07:31.054430 4667 generic.go:334] "Generic (PLEG): container 
finished" podID="8d7dc1b5-7662-4687-964b-b3e21fce9e06" containerID="880da3e2ef396b2fc27ef70d4c80b64e4d0e98ac62c00ed10919b5350be3803a" exitCode=0 Jan 31 04:07:31 crc kubenswrapper[4667]: I0131 04:07:31.054486 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w9cj2" event={"ID":"8d7dc1b5-7662-4687-964b-b3e21fce9e06","Type":"ContainerDied","Data":"880da3e2ef396b2fc27ef70d4c80b64e4d0e98ac62c00ed10919b5350be3803a"} Jan 31 04:07:31 crc kubenswrapper[4667]: I0131 04:07:31.754768 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:07:31 crc kubenswrapper[4667]: I0131 04:07:31.756040 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:07:31 crc kubenswrapper[4667]: I0131 04:07:31.843389 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:07:31 crc kubenswrapper[4667]: I0131 04:07:31.843482 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.096597 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w9cj2" event={"ID":"8d7dc1b5-7662-4687-964b-b3e21fce9e06","Type":"ContainerDied","Data":"91f50a9c2580148fd54f2e67117a35d9e045cbb9f43ff636b97ffca306a85772"} Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.097220 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91f50a9c2580148fd54f2e67117a35d9e045cbb9f43ff636b97ffca306a85772" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.105438 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mkdm4" event={"ID":"6e23be1c-6ab2-442e-b12e-e4083c274a67","Type":"ContainerDied","Data":"5fca837a3b11257d244086a66e6c594919dfc96073c003b183444fc3f6002205"} Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.105513 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fca837a3b11257d244086a66e6c594919dfc96073c003b183444fc3f6002205" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.124301 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mkdm4" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.147155 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.241146 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-credential-keys\") pod \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.241215 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e23be1c-6ab2-442e-b12e-e4083c274a67-config-data\") pod \"6e23be1c-6ab2-442e-b12e-e4083c274a67\" (UID: \"6e23be1c-6ab2-442e-b12e-e4083c274a67\") " Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.241271 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv7r6\" (UniqueName: \"kubernetes.io/projected/6e23be1c-6ab2-442e-b12e-e4083c274a67-kube-api-access-xv7r6\") pod \"6e23be1c-6ab2-442e-b12e-e4083c274a67\" (UID: \"6e23be1c-6ab2-442e-b12e-e4083c274a67\") " Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.241341 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e23be1c-6ab2-442e-b12e-e4083c274a67-combined-ca-bundle\") pod \"6e23be1c-6ab2-442e-b12e-e4083c274a67\" (UID: \"6e23be1c-6ab2-442e-b12e-e4083c274a67\") " Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.241381 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-combined-ca-bundle\") pod \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.241411 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e23be1c-6ab2-442e-b12e-e4083c274a67-scripts\") pod \"6e23be1c-6ab2-442e-b12e-e4083c274a67\" (UID: \"6e23be1c-6ab2-442e-b12e-e4083c274a67\") " Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.241457 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-scripts\") pod \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.241535 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c2fc\" (UniqueName: \"kubernetes.io/projected/8d7dc1b5-7662-4687-964b-b3e21fce9e06-kube-api-access-4c2fc\") pod \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.241556 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-config-data\") pod \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.241580 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e23be1c-6ab2-442e-b12e-e4083c274a67-logs\") pod \"6e23be1c-6ab2-442e-b12e-e4083c274a67\" (UID: 
\"6e23be1c-6ab2-442e-b12e-e4083c274a67\") " Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.241610 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-fernet-keys\") pod \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\" (UID: \"8d7dc1b5-7662-4687-964b-b3e21fce9e06\") " Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.253061 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d7dc1b5-7662-4687-964b-b3e21fce9e06-kube-api-access-4c2fc" (OuterVolumeSpecName: "kube-api-access-4c2fc") pod "8d7dc1b5-7662-4687-964b-b3e21fce9e06" (UID: "8d7dc1b5-7662-4687-964b-b3e21fce9e06"). InnerVolumeSpecName "kube-api-access-4c2fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.255804 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8d7dc1b5-7662-4687-964b-b3e21fce9e06" (UID: "8d7dc1b5-7662-4687-964b-b3e21fce9e06"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.258589 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e23be1c-6ab2-442e-b12e-e4083c274a67-logs" (OuterVolumeSpecName: "logs") pod "6e23be1c-6ab2-442e-b12e-e4083c274a67" (UID: "6e23be1c-6ab2-442e-b12e-e4083c274a67"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.276547 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e23be1c-6ab2-442e-b12e-e4083c274a67-scripts" (OuterVolumeSpecName: "scripts") pod "6e23be1c-6ab2-442e-b12e-e4083c274a67" (UID: "6e23be1c-6ab2-442e-b12e-e4083c274a67"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.276523 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-scripts" (OuterVolumeSpecName: "scripts") pod "8d7dc1b5-7662-4687-964b-b3e21fce9e06" (UID: "8d7dc1b5-7662-4687-964b-b3e21fce9e06"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.276547 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e23be1c-6ab2-442e-b12e-e4083c274a67-kube-api-access-xv7r6" (OuterVolumeSpecName: "kube-api-access-xv7r6") pod "6e23be1c-6ab2-442e-b12e-e4083c274a67" (UID: "6e23be1c-6ab2-442e-b12e-e4083c274a67"). InnerVolumeSpecName "kube-api-access-xv7r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.305736 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e23be1c-6ab2-442e-b12e-e4083c274a67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e23be1c-6ab2-442e-b12e-e4083c274a67" (UID: "6e23be1c-6ab2-442e-b12e-e4083c274a67"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.315619 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8d7dc1b5-7662-4687-964b-b3e21fce9e06" (UID: "8d7dc1b5-7662-4687-964b-b3e21fce9e06"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.317599 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d7dc1b5-7662-4687-964b-b3e21fce9e06" (UID: "8d7dc1b5-7662-4687-964b-b3e21fce9e06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.321370 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-config-data" (OuterVolumeSpecName: "config-data") pod "8d7dc1b5-7662-4687-964b-b3e21fce9e06" (UID: "8d7dc1b5-7662-4687-964b-b3e21fce9e06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.346306 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.346348 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c2fc\" (UniqueName: \"kubernetes.io/projected/8d7dc1b5-7662-4687-964b-b3e21fce9e06-kube-api-access-4c2fc\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.346364 4667 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e23be1c-6ab2-442e-b12e-e4083c274a67-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.346373 4667 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.346383 4667 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.346396 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv7r6\" (UniqueName: \"kubernetes.io/projected/6e23be1c-6ab2-442e-b12e-e4083c274a67-kube-api-access-xv7r6\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.346407 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e23be1c-6ab2-442e-b12e-e4083c274a67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.346419 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 
04:07:33.346428 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e23be1c-6ab2-442e-b12e-e4083c274a67-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.346438 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d7dc1b5-7662-4687-964b-b3e21fce9e06-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.430916 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e23be1c-6ab2-442e-b12e-e4083c274a67-config-data" (OuterVolumeSpecName: "config-data") pod "6e23be1c-6ab2-442e-b12e-e4083c274a67" (UID: "6e23be1c-6ab2-442e-b12e-e4083c274a67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.448779 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e23be1c-6ab2-442e-b12e-e4083c274a67-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.477057 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.550893 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-ovsdbserver-nb\") pod \"a0e678e9-3d71-4c27-a179-f4fdd1515701\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.550980 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-config\") pod \"a0e678e9-3d71-4c27-a179-f4fdd1515701\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.551099 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-ovsdbserver-sb\") pod \"a0e678e9-3d71-4c27-a179-f4fdd1515701\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.551159 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6ps4\" (UniqueName: \"kubernetes.io/projected/a0e678e9-3d71-4c27-a179-f4fdd1515701-kube-api-access-x6ps4\") pod \"a0e678e9-3d71-4c27-a179-f4fdd1515701\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.551203 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-dns-swift-storage-0\") pod \"a0e678e9-3d71-4c27-a179-f4fdd1515701\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.551238 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-dns-svc\") pod \"a0e678e9-3d71-4c27-a179-f4fdd1515701\" (UID: \"a0e678e9-3d71-4c27-a179-f4fdd1515701\") " Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.579015 4667 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e678e9-3d71-4c27-a179-f4fdd1515701-kube-api-access-x6ps4" (OuterVolumeSpecName: "kube-api-access-x6ps4") pod "a0e678e9-3d71-4c27-a179-f4fdd1515701" (UID: "a0e678e9-3d71-4c27-a179-f4fdd1515701"). InnerVolumeSpecName "kube-api-access-x6ps4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.655108 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6ps4\" (UniqueName: \"kubernetes.io/projected/a0e678e9-3d71-4c27-a179-f4fdd1515701-kube-api-access-x6ps4\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.666909 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a0e678e9-3d71-4c27-a179-f4fdd1515701" (UID: "a0e678e9-3d71-4c27-a179-f4fdd1515701"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.676335 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a0e678e9-3d71-4c27-a179-f4fdd1515701" (UID: "a0e678e9-3d71-4c27-a179-f4fdd1515701"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.691499 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-config" (OuterVolumeSpecName: "config") pod "a0e678e9-3d71-4c27-a179-f4fdd1515701" (UID: "a0e678e9-3d71-4c27-a179-f4fdd1515701"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.692687 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a0e678e9-3d71-4c27-a179-f4fdd1515701" (UID: "a0e678e9-3d71-4c27-a179-f4fdd1515701"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.700354 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a0e678e9-3d71-4c27-a179-f4fdd1515701" (UID: "a0e678e9-3d71-4c27-a179-f4fdd1515701"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.757057 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.757099 4667 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.757111 4667 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.757125 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:33 crc kubenswrapper[4667]: I0131 04:07:33.757135 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e678e9-3d71-4c27-a179-f4fdd1515701-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.118584 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" event={"ID":"a0e678e9-3d71-4c27-a179-f4fdd1515701","Type":"ContainerDied","Data":"4be911f990fe2db6f473ef6b343a6f9c56c8b730a803a8d290f6b3a4b72c0ba8"} Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.118665 4667 scope.go:117] "RemoveContainer" containerID="cbd292d4994e7bb4f762b3480154df9034043365069ac633f8390e73438bd433" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.118701 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-gzgp6" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.122737 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-746c944c96-t4g84_43842154-1666-491b-b37a-061c1a7c2b90/neutron-httpd/1.log" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.136287 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4nj2p" event={"ID":"7b6bac61-1103-438b-9e75-f3d6b6902270","Type":"ContainerStarted","Data":"0b580b6dd82d2193fa73ab8fbf259431448619ef80802ef62c949f8363ee652d"} Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.148997 4667 scope.go:117] "RemoveContainer" containerID="fdd1cd838e537db8ae2d9de10dfce44473176f3f68d9b971afbbf2fbcb3b8c56" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.149272 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mkdm4" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.149986 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-w9cj2" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.163669 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c175848a-4645-42e7-8ccc-ab873e1ff7aa","Type":"ContainerStarted","Data":"3f956df323dcf5ea513fbf4fca63c5b6c48b46d1d34cec9e0da1d18d570c1f71"} Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.168287 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-4nj2p" podStartSLOduration=3.578921262 podStartE2EDuration="55.168257436s" podCreationTimestamp="2026-01-31 04:06:39 +0000 UTC" firstStartedPulling="2026-01-31 04:06:41.610497649 +0000 UTC m=+1125.126832948" lastFinishedPulling="2026-01-31 04:07:33.199833823 +0000 UTC m=+1176.716169122" observedRunningTime="2026-01-31 04:07:34.166353346 +0000 UTC m=+1177.682688645" watchObservedRunningTime="2026-01-31 04:07:34.168257436 +0000 UTC m=+1177.684592735" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.224505 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gzgp6"] Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.239223 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gzgp6"] Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.322490 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7f96457d78-wrrfr"] Jan 31 04:07:34 crc kubenswrapper[4667]: E0131 04:07:34.322918 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d7dc1b5-7662-4687-964b-b3e21fce9e06" containerName="keystone-bootstrap" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.322943 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d7dc1b5-7662-4687-964b-b3e21fce9e06" containerName="keystone-bootstrap" Jan 31 04:07:34 crc kubenswrapper[4667]: E0131 04:07:34.322956 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e23be1c-6ab2-442e-b12e-e4083c274a67" containerName="placement-db-sync" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.322963 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e23be1c-6ab2-442e-b12e-e4083c274a67" containerName="placement-db-sync" Jan 31 04:07:34 crc kubenswrapper[4667]: E0131 04:07:34.322969 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e678e9-3d71-4c27-a179-f4fdd1515701" containerName="init" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.322975 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e678e9-3d71-4c27-a179-f4fdd1515701" containerName="init" Jan 31 04:07:34 crc kubenswrapper[4667]: E0131 04:07:34.322989 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e678e9-3d71-4c27-a179-f4fdd1515701" containerName="dnsmasq-dns" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.322998 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e678e9-3d71-4c27-a179-f4fdd1515701" containerName="dnsmasq-dns" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.323195 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d7dc1b5-7662-4687-964b-b3e21fce9e06" containerName="keystone-bootstrap" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.323245 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e678e9-3d71-4c27-a179-f4fdd1515701" containerName="dnsmasq-dns" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.323265 4667 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="6e23be1c-6ab2-442e-b12e-e4083c274a67" containerName="placement-db-sync" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.338151 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.356498 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.356988 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.357207 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vttwk" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.357381 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.357541 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.428728 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f96457d78-wrrfr"] Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.469899 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5558665b54-mq2t5"] Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.471280 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.478514 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.478696 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.478800 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.478864 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rw7d7" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.478986 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.479090 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.479253 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-internal-tls-certs\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.479369 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-scripts\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.479410 4667 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-public-tls-certs\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.479450 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-config-data\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.479526 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsd52\" (UniqueName: \"kubernetes.io/projected/55398def-7876-49e4-9509-29374a5f9321-kube-api-access-bsd52\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.479564 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-combined-ca-bundle\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.479608 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55398def-7876-49e4-9509-29374a5f9321-logs\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.503428 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5558665b54-mq2t5"] Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.506115 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.507996 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.508048 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.508119 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.563752 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.582209 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.585286 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-credential-keys\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " 
pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.585428 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsd52\" (UniqueName: \"kubernetes.io/projected/55398def-7876-49e4-9509-29374a5f9321-kube-api-access-bsd52\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.585471 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-scripts\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.585502 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-config-data\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.585549 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-combined-ca-bundle\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.585604 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55398def-7876-49e4-9509-29374a5f9321-logs\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.585700 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfvwc\" (UniqueName: \"kubernetes.io/projected/2cf275de-3442-4fe5-ab8b-a4796c0bc829-kube-api-access-mfvwc\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.585769 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-public-tls-certs\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.585822 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-combined-ca-bundle\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.585934 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-internal-tls-certs\") pod \"keystone-5558665b54-mq2t5\" (UID: 
\"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.585999 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-internal-tls-certs\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.586048 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-fernet-keys\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.586108 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-scripts\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.586142 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-public-tls-certs\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.586201 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-config-data\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.589250 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55398def-7876-49e4-9509-29374a5f9321-logs\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.594731 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-combined-ca-bundle\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.595039 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-config-data\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.598501 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-public-tls-certs\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.634824 4667 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-internal-tls-certs\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.640428 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsd52\" (UniqueName: \"kubernetes.io/projected/55398def-7876-49e4-9509-29374a5f9321-kube-api-access-bsd52\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.660381 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-scripts\") pod \"placement-7f96457d78-wrrfr\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") " pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.687969 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-public-tls-certs\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.688028 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-combined-ca-bundle\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.688074 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-internal-tls-certs\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.688123 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-fernet-keys\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.688234 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-credential-keys\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.688267 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-scripts\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.688295 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-config-data\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.688352 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfvwc\" (UniqueName: \"kubernetes.io/projected/2cf275de-3442-4fe5-ab8b-a4796c0bc829-kube-api-access-mfvwc\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.693325 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-scripts\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.699798 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-config-data\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.702440 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-combined-ca-bundle\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.702567 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-public-tls-certs\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.702832 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-credential-keys\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.703134 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-fernet-keys\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.708673 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cf275de-3442-4fe5-ab8b-a4796c0bc829-internal-tls-certs\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.714037 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfvwc\" (UniqueName: \"kubernetes.io/projected/2cf275de-3442-4fe5-ab8b-a4796c0bc829-kube-api-access-mfvwc\") pod \"keystone-5558665b54-mq2t5\" (UID: \"2cf275de-3442-4fe5-ab8b-a4796c0bc829\") " 
pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.722417 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.739457 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.739513 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.739526 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.739538 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.793018 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.793735 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 04:07:34 crc kubenswrapper[4667]: I0131 04:07:34.798924 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:35 crc kubenswrapper[4667]: I0131 04:07:35.300892 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0e678e9-3d71-4c27-a179-f4fdd1515701" path="/var/lib/kubelet/pods/a0e678e9-3d71-4c27-a179-f4fdd1515701/volumes" Jan 31 04:07:35 crc kubenswrapper[4667]: I0131 04:07:35.333205 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7f96457d78-wrrfr"] Jan 31 04:07:35 crc kubenswrapper[4667]: W0131 04:07:35.362684 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55398def_7876_49e4_9509_29374a5f9321.slice/crio-49c0a3d00150cd561fe37c24f2eacf950be54cda5c096d6630d6856a10036fed WatchSource:0}: Error finding container 49c0a3d00150cd561fe37c24f2eacf950be54cda5c096d6630d6856a10036fed: Status 404 returned error can't find the container with id 49c0a3d00150cd561fe37c24f2eacf950be54cda5c096d6630d6856a10036fed Jan 31 04:07:35 crc kubenswrapper[4667]: I0131 04:07:35.529709 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5558665b54-mq2t5"] Jan 31 04:07:36 crc kubenswrapper[4667]: I0131 04:07:36.223971 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5558665b54-mq2t5" event={"ID":"2cf275de-3442-4fe5-ab8b-a4796c0bc829","Type":"ContainerStarted","Data":"2427b96386fe770b0d47138aef54ed2bc261084476e4f5df6cdf6bd5e9e3e433"} Jan 31 04:07:36 crc kubenswrapper[4667]: I0131 04:07:36.224574 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5558665b54-mq2t5" event={"ID":"2cf275de-3442-4fe5-ab8b-a4796c0bc829","Type":"ContainerStarted","Data":"c7dc1f636133fc8600861e324311c919d5562a1a27bc01831792ebc9432c56cf"} Jan 31 04:07:36 crc kubenswrapper[4667]: I0131 04:07:36.235459 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f96457d78-wrrfr" event={"ID":"55398def-7876-49e4-9509-29374a5f9321","Type":"ContainerStarted","Data":"43f2c08ab8c7d46bfc3dcd7a50c24313f1a272150df709c5ef0da7ed110d9159"} Jan 31 
04:07:36 crc kubenswrapper[4667]: I0131 04:07:36.235744 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f96457d78-wrrfr" event={"ID":"55398def-7876-49e4-9509-29374a5f9321","Type":"ContainerStarted","Data":"49c0a3d00150cd561fe37c24f2eacf950be54cda5c096d6630d6856a10036fed"} Jan 31 04:07:37 crc kubenswrapper[4667]: I0131 04:07:37.337137 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f96457d78-wrrfr" event={"ID":"55398def-7876-49e4-9509-29374a5f9321","Type":"ContainerStarted","Data":"93c9f813d28740047f5f15de7f6fe09cb14534a8f751e97820605552bd6ca35c"} Jan 31 04:07:37 crc kubenswrapper[4667]: I0131 04:07:37.337792 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:37 crc kubenswrapper[4667]: I0131 04:07:37.337901 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5558665b54-mq2t5" Jan 31 04:07:37 crc kubenswrapper[4667]: I0131 04:07:37.337917 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:07:37 crc kubenswrapper[4667]: I0131 04:07:37.458614 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5558665b54-mq2t5" podStartSLOduration=3.458585937 podStartE2EDuration="3.458585937s" podCreationTimestamp="2026-01-31 04:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:37.417492548 +0000 UTC m=+1180.933827847" watchObservedRunningTime="2026-01-31 04:07:37.458585937 +0000 UTC m=+1180.974921236" Jan 31 04:07:37 crc kubenswrapper[4667]: I0131 04:07:37.515317 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7f96457d78-wrrfr" podStartSLOduration=3.515274969 podStartE2EDuration="3.515274969s" podCreationTimestamp="2026-01-31 04:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:37.452454624 +0000 UTC m=+1180.968789933" watchObservedRunningTime="2026-01-31 04:07:37.515274969 +0000 UTC m=+1181.031610268" Jan 31 04:07:39 crc kubenswrapper[4667]: I0131 04:07:39.373690 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-245c9" event={"ID":"bc9db8ae-2f60-4efd-9a11-4aac5f336900","Type":"ContainerStarted","Data":"4c202cc92d71eb2da32ef44d2be02ac9d3194bb7b9c71a71d07945f8521093e4"} Jan 31 04:07:39 crc kubenswrapper[4667]: I0131 04:07:39.398425 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-245c9" podStartSLOduration=4.717989924 podStartE2EDuration="1m0.398395409s" podCreationTimestamp="2026-01-31 04:06:39 +0000 UTC" firstStartedPulling="2026-01-31 04:06:41.227076158 +0000 UTC m=+1124.743411457" lastFinishedPulling="2026-01-31 04:07:36.907481643 +0000 UTC m=+1180.423816942" observedRunningTime="2026-01-31 04:07:39.392564715 +0000 UTC m=+1182.908900014" watchObservedRunningTime="2026-01-31 04:07:39.398395409 +0000 UTC m=+1182.914730708" Jan 31 04:07:40 crc kubenswrapper[4667]: I0131 04:07:40.613537 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 04:07:40 crc kubenswrapper[4667]: I0131 04:07:40.613697 4667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 04:07:40 crc 
kubenswrapper[4667]: I0131 04:07:40.633689 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 04:07:41 crc kubenswrapper[4667]: I0131 04:07:41.031828 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 04:07:41 crc kubenswrapper[4667]: I0131 04:07:41.032047 4667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 04:07:41 crc kubenswrapper[4667]: I0131 04:07:41.286340 4667 scope.go:117] "RemoveContainer" containerID="87736fd513713fd560a94d7e15278e3343582af16953e4d734cf6cb3d54b555e" Jan 31 04:07:41 crc kubenswrapper[4667]: I0131 04:07:41.395520 4667 generic.go:334] "Generic (PLEG): container finished" podID="7b6bac61-1103-438b-9e75-f3d6b6902270" containerID="0b580b6dd82d2193fa73ab8fbf259431448619ef80802ef62c949f8363ee652d" exitCode=0 Jan 31 04:07:41 crc kubenswrapper[4667]: I0131 04:07:41.395847 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4nj2p" event={"ID":"7b6bac61-1103-438b-9e75-f3d6b6902270","Type":"ContainerDied","Data":"0b580b6dd82d2193fa73ab8fbf259431448619ef80802ef62c949f8363ee652d"} Jan 31 04:07:41 crc kubenswrapper[4667]: I0131 04:07:41.471510 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 04:07:41 crc kubenswrapper[4667]: I0131 04:07:41.757074 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 31 04:07:41 crc kubenswrapper[4667]: I0131 04:07:41.846363 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86c748c4d6-2grmh" podUID="c6974567-3bea-447a-bb8b-ced22b6d34ce" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 31 04:07:42 crc kubenswrapper[4667]: I0131 04:07:42.409183 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-746c944c96-t4g84_43842154-1666-491b-b37a-061c1a7c2b90/neutron-httpd/2.log" Jan 31 04:07:42 crc kubenswrapper[4667]: I0131 04:07:42.410065 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-746c944c96-t4g84_43842154-1666-491b-b37a-061c1a7c2b90/neutron-httpd/1.log" Jan 31 04:07:42 crc kubenswrapper[4667]: I0131 04:07:42.410522 4667 generic.go:334] "Generic (PLEG): container finished" podID="43842154-1666-491b-b37a-061c1a7c2b90" containerID="0ac01a4ee14d592410a8439010a709e1191bfd2ccef6b725b09419143daee58c" exitCode=1 Jan 31 04:07:42 crc kubenswrapper[4667]: I0131 04:07:42.410602 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-746c944c96-t4g84" event={"ID":"43842154-1666-491b-b37a-061c1a7c2b90","Type":"ContainerDied","Data":"0ac01a4ee14d592410a8439010a709e1191bfd2ccef6b725b09419143daee58c"} Jan 31 04:07:42 crc kubenswrapper[4667]: I0131 04:07:42.410675 4667 scope.go:117] "RemoveContainer" containerID="87736fd513713fd560a94d7e15278e3343582af16953e4d734cf6cb3d54b555e" Jan 31 04:07:42 crc kubenswrapper[4667]: I0131 04:07:42.412068 4667 scope.go:117] "RemoveContainer" containerID="0ac01a4ee14d592410a8439010a709e1191bfd2ccef6b725b09419143daee58c" 
Jan 31 04:07:42 crc kubenswrapper[4667]: E0131 04:07:42.412387 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-746c944c96-t4g84_openstack(43842154-1666-491b-b37a-061c1a7c2b90)\"" pod="openstack/neutron-746c944c96-t4g84" podUID="43842154-1666-491b-b37a-061c1a7c2b90" Jan 31 04:07:43 crc kubenswrapper[4667]: I0131 04:07:43.471728 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-746c944c96-t4g84_43842154-1666-491b-b37a-061c1a7c2b90/neutron-httpd/2.log" Jan 31 04:07:46 crc kubenswrapper[4667]: I0131 04:07:46.510329 4667 generic.go:334] "Generic (PLEG): container finished" podID="bc9db8ae-2f60-4efd-9a11-4aac5f336900" containerID="4c202cc92d71eb2da32ef44d2be02ac9d3194bb7b9c71a71d07945f8521093e4" exitCode=0 Jan 31 04:07:46 crc kubenswrapper[4667]: I0131 04:07:46.510779 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-245c9" event={"ID":"bc9db8ae-2f60-4efd-9a11-4aac5f336900","Type":"ContainerDied","Data":"4c202cc92d71eb2da32ef44d2be02ac9d3194bb7b9c71a71d07945f8521093e4"} Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.044945 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-245c9" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.126232 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-db-sync-config-data\") pod \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.126542 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-config-data\") pod \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.134583 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bc9db8ae-2f60-4efd-9a11-4aac5f336900" (UID: "bc9db8ae-2f60-4efd-9a11-4aac5f336900"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.221013 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-config-data" (OuterVolumeSpecName: "config-data") pod "bc9db8ae-2f60-4efd-9a11-4aac5f336900" (UID: "bc9db8ae-2f60-4efd-9a11-4aac5f336900"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.228490 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-combined-ca-bundle\") pod \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.228699 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-scripts\") pod \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.228769 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gtrc\" (UniqueName: \"kubernetes.io/projected/bc9db8ae-2f60-4efd-9a11-4aac5f336900-kube-api-access-6gtrc\") pod \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.228785 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc9db8ae-2f60-4efd-9a11-4aac5f336900-etc-machine-id\") pod \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\" (UID: \"bc9db8ae-2f60-4efd-9a11-4aac5f336900\") " Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.229273 4667 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.229284 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.229334 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc9db8ae-2f60-4efd-9a11-4aac5f336900-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bc9db8ae-2f60-4efd-9a11-4aac5f336900" (UID: "bc9db8ae-2f60-4efd-9a11-4aac5f336900"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.253986 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-scripts" (OuterVolumeSpecName: "scripts") pod "bc9db8ae-2f60-4efd-9a11-4aac5f336900" (UID: "bc9db8ae-2f60-4efd-9a11-4aac5f336900"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.261146 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc9db8ae-2f60-4efd-9a11-4aac5f336900" (UID: "bc9db8ae-2f60-4efd-9a11-4aac5f336900"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.264054 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9db8ae-2f60-4efd-9a11-4aac5f336900-kube-api-access-6gtrc" (OuterVolumeSpecName: "kube-api-access-6gtrc") pod "bc9db8ae-2f60-4efd-9a11-4aac5f336900" (UID: "bc9db8ae-2f60-4efd-9a11-4aac5f336900"). InnerVolumeSpecName "kube-api-access-6gtrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.332208 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.332256 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9db8ae-2f60-4efd-9a11-4aac5f336900-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.332267 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gtrc\" (UniqueName: \"kubernetes.io/projected/bc9db8ae-2f60-4efd-9a11-4aac5f336900-kube-api-access-6gtrc\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.332276 4667 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc9db8ae-2f60-4efd-9a11-4aac5f336900-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.563478 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-245c9" event={"ID":"bc9db8ae-2f60-4efd-9a11-4aac5f336900","Type":"ContainerDied","Data":"33743d1499cdabcf6f82930a3464c7d322aa057fd1e661d13bba920192c5232c"} Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.563524 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33743d1499cdabcf6f82930a3464c7d322aa057fd1e661d13bba920192c5232c" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.563592 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-245c9" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.890435 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 04:07:48 crc kubenswrapper[4667]: E0131 04:07:48.890989 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9db8ae-2f60-4efd-9a11-4aac5f336900" containerName="cinder-db-sync" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.891036 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9db8ae-2f60-4efd-9a11-4aac5f336900" containerName="cinder-db-sync" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.891250 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9db8ae-2f60-4efd-9a11-4aac5f336900" containerName="cinder-db-sync" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.892401 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.900434 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.903159 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.903208 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.903238 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 31 04:07:48 crc kubenswrapper[4667]: I0131 04:07:48.903274 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9b928" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.047191 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-config-data\") pod \"cinder-scheduler-0\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") " pod="openstack/cinder-scheduler-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.047275 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ckgn\" (UniqueName: \"kubernetes.io/projected/847c3d86-2c1d-4b19-9558-5c03c65e1539-kube-api-access-2ckgn\") pod \"cinder-scheduler-0\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") " pod="openstack/cinder-scheduler-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.047331 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") " pod="openstack/cinder-scheduler-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.047350 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-scripts\") pod \"cinder-scheduler-0\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") " pod="openstack/cinder-scheduler-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.047369 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/847c3d86-2c1d-4b19-9558-5c03c65e1539-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") " pod="openstack/cinder-scheduler-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.047396 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") " pod="openstack/cinder-scheduler-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.054764 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b895b5785-qpl4b"] Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.070029 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.095771 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-qpl4b"] Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.150529 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/847c3d86-2c1d-4b19-9558-5c03c65e1539-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") " pod="openstack/cinder-scheduler-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.150589 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") " pod="openstack/cinder-scheduler-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.150661 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-config-data\") pod \"cinder-scheduler-0\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") " pod="openstack/cinder-scheduler-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.150714 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ckgn\" (UniqueName: \"kubernetes.io/projected/847c3d86-2c1d-4b19-9558-5c03c65e1539-kube-api-access-2ckgn\") pod \"cinder-scheduler-0\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") " pod="openstack/cinder-scheduler-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.150766 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") " pod="openstack/cinder-scheduler-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.150787 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-scripts\") pod \"cinder-scheduler-0\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") " pod="openstack/cinder-scheduler-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.151478 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/847c3d86-2c1d-4b19-9558-5c03c65e1539-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") " pod="openstack/cinder-scheduler-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.155831 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-scripts\") pod \"cinder-scheduler-0\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") " pod="openstack/cinder-scheduler-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.157383 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-config-data\") pod \"cinder-scheduler-0\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") " pod="openstack/cinder-scheduler-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 
04:07:49.157686 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") " pod="openstack/cinder-scheduler-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.186877 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ckgn\" (UniqueName: \"kubernetes.io/projected/847c3d86-2c1d-4b19-9558-5c03c65e1539-kube-api-access-2ckgn\") pod \"cinder-scheduler-0\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") " pod="openstack/cinder-scheduler-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.193994 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") " pod="openstack/cinder-scheduler-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.251186 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.254172 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgfgp\" (UniqueName: \"kubernetes.io/projected/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-kube-api-access-rgfgp\") pod \"dnsmasq-dns-b895b5785-qpl4b\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.254225 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-qpl4b\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.254272 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-qpl4b\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.254308 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-dns-svc\") pod \"dnsmasq-dns-b895b5785-qpl4b\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.254386 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-qpl4b\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.254406 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-config\") pod 
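The records above show the kubelet volume reconciler walking each of cinder-scheduler-0's six volumes through its usual three-step lifecycle: "VerifyControllerAttachedVolume started", "MountVolume started", "MountVolume.SetUp succeeded". Below is a minimal sketch of how one might tally those SetUp successes per pod from a capture like this one; the regular expression is fitted to this log's escaped-quote formatting and is an assumption, not an official parser.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches operation_generator lines as they appear in this capture, e.g.
//   MountVolume.SetUp succeeded for volume \"scripts\" ... pod="openstack/cinder-scheduler-0"
var setupOK = regexp.MustCompile(`MountVolume\.SetUp succeeded for volume \\"([^\\"]+)\\".*pod="([^"]+)"`)

func main() {
	perPod := map[string][]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := setupOK.FindStringSubmatch(sc.Text()); m != nil {
			perPod[m[2]] = append(perPod[m[2]], m[1]) // volume name keyed by pod
		}
	}
	for pod, vols := range perPod {
		fmt.Printf("%s: %d volumes mounted %v\n", pod, len(vols), vols)
	}
}

Fed something like "journalctl -u kubelet --no-pager | go run tally.go", this would report six mounted volumes for openstack/cinder-scheduler-0 once the sequence above completes.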
\"dnsmasq-dns-b895b5785-qpl4b\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.358292 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgfgp\" (UniqueName: \"kubernetes.io/projected/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-kube-api-access-rgfgp\") pod \"dnsmasq-dns-b895b5785-qpl4b\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.358543 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-qpl4b\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.358672 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-qpl4b\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.358832 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-dns-svc\") pod \"dnsmasq-dns-b895b5785-qpl4b\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.358980 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-qpl4b\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.359058 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-config\") pod \"dnsmasq-dns-b895b5785-qpl4b\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.360182 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-config\") pod \"dnsmasq-dns-b895b5785-qpl4b\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.361201 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-qpl4b\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.366423 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-qpl4b\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " 
pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.367065 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-dns-svc\") pod \"dnsmasq-dns-b895b5785-qpl4b\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.368246 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-qpl4b\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.402312 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.412636 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.417358 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.418332 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.420738 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgfgp\" (UniqueName: \"kubernetes.io/projected/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-kube-api-access-rgfgp\") pod \"dnsmasq-dns-b895b5785-qpl4b\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.567746 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-config-data\") pod \"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.567861 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-config-data-custom\") pod \"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.567891 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29617aa3-f0e0-4528-9ba6-1385314227d9-logs\") pod \"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.567933 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29617aa3-f0e0-4528-9ba6-1385314227d9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.567988 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-scripts\") pod 
\"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.568059 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.568084 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9c6r\" (UniqueName: \"kubernetes.io/projected/29617aa3-f0e0-4528-9ba6-1385314227d9-kube-api-access-l9c6r\") pod \"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.669732 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-scripts\") pod \"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.669819 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.669876 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9c6r\" (UniqueName: \"kubernetes.io/projected/29617aa3-f0e0-4528-9ba6-1385314227d9-kube-api-access-l9c6r\") pod \"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.669934 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-config-data\") pod \"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.670004 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-config-data-custom\") pod \"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.670024 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29617aa3-f0e0-4528-9ba6-1385314227d9-logs\") pod \"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.670056 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29617aa3-f0e0-4528-9ba6-1385314227d9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.670148 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/29617aa3-f0e0-4528-9ba6-1385314227d9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.671035 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29617aa3-f0e0-4528-9ba6-1385314227d9-logs\") pod \"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.676339 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-scripts\") pod \"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.679057 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.681156 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-config-data\") pod \"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.699609 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-config-data-custom\") pod \"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.703147 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9c6r\" (UniqueName: \"kubernetes.io/projected/29617aa3-f0e0-4528-9ba6-1385314227d9-kube-api-access-l9c6r\") pod \"cinder-api-0\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " pod="openstack/cinder-api-0" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.707774 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:49 crc kubenswrapper[4667]: I0131 04:07:49.766352 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 04:07:50 crc kubenswrapper[4667]: I0131 04:07:50.120389 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4nj2p" Jan 31 04:07:50 crc kubenswrapper[4667]: I0131 04:07:50.282318 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6t7c\" (UniqueName: \"kubernetes.io/projected/7b6bac61-1103-438b-9e75-f3d6b6902270-kube-api-access-x6t7c\") pod \"7b6bac61-1103-438b-9e75-f3d6b6902270\" (UID: \"7b6bac61-1103-438b-9e75-f3d6b6902270\") " Jan 31 04:07:50 crc kubenswrapper[4667]: I0131 04:07:50.282978 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6bac61-1103-438b-9e75-f3d6b6902270-combined-ca-bundle\") pod \"7b6bac61-1103-438b-9e75-f3d6b6902270\" (UID: \"7b6bac61-1103-438b-9e75-f3d6b6902270\") " Jan 31 04:07:50 crc kubenswrapper[4667]: I0131 04:07:50.283017 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b6bac61-1103-438b-9e75-f3d6b6902270-db-sync-config-data\") pod \"7b6bac61-1103-438b-9e75-f3d6b6902270\" (UID: \"7b6bac61-1103-438b-9e75-f3d6b6902270\") " Jan 31 04:07:50 crc kubenswrapper[4667]: I0131 04:07:50.292103 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6bac61-1103-438b-9e75-f3d6b6902270-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7b6bac61-1103-438b-9e75-f3d6b6902270" (UID: "7b6bac61-1103-438b-9e75-f3d6b6902270"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:50 crc kubenswrapper[4667]: I0131 04:07:50.301085 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b6bac61-1103-438b-9e75-f3d6b6902270-kube-api-access-x6t7c" (OuterVolumeSpecName: "kube-api-access-x6t7c") pod "7b6bac61-1103-438b-9e75-f3d6b6902270" (UID: "7b6bac61-1103-438b-9e75-f3d6b6902270"). InnerVolumeSpecName "kube-api-access-x6t7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:50 crc kubenswrapper[4667]: I0131 04:07:50.312950 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6bac61-1103-438b-9e75-f3d6b6902270-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b6bac61-1103-438b-9e75-f3d6b6902270" (UID: "7b6bac61-1103-438b-9e75-f3d6b6902270"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:07:50 crc kubenswrapper[4667]: I0131 04:07:50.385473 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6bac61-1103-438b-9e75-f3d6b6902270-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:50 crc kubenswrapper[4667]: I0131 04:07:50.385528 4667 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7b6bac61-1103-438b-9e75-f3d6b6902270-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:50 crc kubenswrapper[4667]: I0131 04:07:50.385541 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6t7c\" (UniqueName: \"kubernetes.io/projected/7b6bac61-1103-438b-9e75-f3d6b6902270-kube-api-access-x6t7c\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:50 crc kubenswrapper[4667]: I0131 04:07:50.489792 4667 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/neutron-746c944c96-t4g84" Jan 31 04:07:50 crc kubenswrapper[4667]: I0131 04:07:50.490384 4667 scope.go:117] "RemoveContainer" containerID="0ac01a4ee14d592410a8439010a709e1191bfd2ccef6b725b09419143daee58c" Jan 31 04:07:50 crc kubenswrapper[4667]: E0131 04:07:50.490589 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-746c944c96-t4g84_openstack(43842154-1666-491b-b37a-061c1a7c2b90)\"" pod="openstack/neutron-746c944c96-t4g84" podUID="43842154-1666-491b-b37a-061c1a7c2b90" Jan 31 04:07:50 crc kubenswrapper[4667]: I0131 04:07:50.492400 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-746c944c96-t4g84" Jan 31 04:07:50 crc kubenswrapper[4667]: I0131 04:07:50.495743 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-746c944c96-t4g84" podUID="43842154-1666-491b-b37a-061c1a7c2b90" containerName="neutron-api" probeResult="failure" output="Get \"http://10.217.0.156:9696/\": dial tcp 10.217.0.156:9696: connect: connection refused" Jan 31 04:07:50 crc kubenswrapper[4667]: I0131 04:07:50.654617 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4nj2p" Jan 31 04:07:50 crc kubenswrapper[4667]: I0131 04:07:50.655015 4667 scope.go:117] "RemoveContainer" containerID="0ac01a4ee14d592410a8439010a709e1191bfd2ccef6b725b09419143daee58c" Jan 31 04:07:50 crc kubenswrapper[4667]: E0131 04:07:50.655682 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-746c944c96-t4g84_openstack(43842154-1666-491b-b37a-061c1a7c2b90)\"" pod="openstack/neutron-746c944c96-t4g84" podUID="43842154-1666-491b-b37a-061c1a7c2b90" Jan 31 04:07:50 crc kubenswrapper[4667]: I0131 04:07:50.655036 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4nj2p" event={"ID":"7b6bac61-1103-438b-9e75-f3d6b6902270","Type":"ContainerDied","Data":"8a13527100341ec328f0ca96a57e3e425c0751023ece72646847c5d0b46270d3"} Jan 31 04:07:50 crc kubenswrapper[4667]: I0131 04:07:50.655730 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a13527100341ec328f0ca96a57e3e425c0751023ece72646847c5d0b46270d3" Jan 31 04:07:50 crc kubenswrapper[4667]: E0131 04:07:50.850130 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="c175848a-4645-42e7-8ccc-ab873e1ff7aa" Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.106160 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.279779 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.325202 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-qpl4b"] Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.703123 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"847c3d86-2c1d-4b19-9558-5c03c65e1539","Type":"ContainerStarted","Data":"5abe44b586c1affd5ff29b87b7ae62a7cfc7581d04c5bc70078b4f6f38c68935"} Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.715408 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-666d645645-4kb44"] Jan 31 04:07:51 crc kubenswrapper[4667]: E0131 04:07:51.715897 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b6bac61-1103-438b-9e75-f3d6b6902270" containerName="barbican-db-sync" Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.715910 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b6bac61-1103-438b-9e75-f3d6b6902270" containerName="barbican-db-sync" Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.716085 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b6bac61-1103-438b-9e75-f3d6b6902270" containerName="barbican-db-sync" Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.717064 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-666d645645-4kb44" Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.727690 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"29617aa3-f0e0-4528-9ba6-1385314227d9","Type":"ContainerStarted","Data":"f124111ecc20f94e6d850fbf1512e16fa4beecee0c740f82f0103d8a68f1adb2"} Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.746449 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b895b5785-qpl4b" event={"ID":"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2","Type":"ContainerStarted","Data":"50f8db8655a7b60928d1678914b5cdd787ca94322b38bb87e2ec9ca4b9c19438"} Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.748239 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c175848a-4645-42e7-8ccc-ab873e1ff7aa","Type":"ContainerStarted","Data":"01eb1079afd2af8f3389078687891d730c75bad84b6638edecff44816261ec2e"} Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.748449 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c175848a-4645-42e7-8ccc-ab873e1ff7aa" containerName="ceilometer-notification-agent" containerID="cri-o://5f3054a5c6f2254b318f9d5a214799bad34be30fa6a4ea2f5072c54c61f95f3b" gracePeriod=30 Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.748750 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.748802 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c175848a-4645-42e7-8ccc-ab873e1ff7aa" containerName="proxy-httpd" containerID="cri-o://01eb1079afd2af8f3389078687891d730c75bad84b6638edecff44816261ec2e" gracePeriod=30 Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.748870 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c175848a-4645-42e7-8ccc-ab873e1ff7aa" containerName="sg-core" containerID="cri-o://3f956df323dcf5ea513fbf4fca63c5b6c48b46d1d34cec9e0da1d18d570c1f71" gracePeriod=30 Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.755353 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.759442 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.759643 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.759749 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-wnb99" Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.792610 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-666d645645-4kb44"] Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.848489 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86c748c4d6-2grmh" podUID="c6974567-3bea-447a-bb8b-ced22b6d34ce" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.869216 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbmf8\" (UniqueName: \"kubernetes.io/projected/efc85fb0-e1c4-4a14-aeeb-a0526ff668d1-kube-api-access-kbmf8\") pod \"barbican-worker-666d645645-4kb44\" (UID: \"efc85fb0-e1c4-4a14-aeeb-a0526ff668d1\") " pod="openstack/barbican-worker-666d645645-4kb44" Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.869382 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efc85fb0-e1c4-4a14-aeeb-a0526ff668d1-logs\") pod \"barbican-worker-666d645645-4kb44\" (UID: \"efc85fb0-e1c4-4a14-aeeb-a0526ff668d1\") " pod="openstack/barbican-worker-666d645645-4kb44" Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.869622 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efc85fb0-e1c4-4a14-aeeb-a0526ff668d1-config-data-custom\") pod \"barbican-worker-666d645645-4kb44\" (UID: \"efc85fb0-e1c4-4a14-aeeb-a0526ff668d1\") " pod="openstack/barbican-worker-666d645645-4kb44" Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.869743 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efc85fb0-e1c4-4a14-aeeb-a0526ff668d1-config-data\") pod \"barbican-worker-666d645645-4kb44\" (UID: \"efc85fb0-e1c4-4a14-aeeb-a0526ff668d1\") " pod="openstack/barbican-worker-666d645645-4kb44" Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.869767 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc85fb0-e1c4-4a14-aeeb-a0526ff668d1-combined-ca-bundle\") pod \"barbican-worker-666d645645-4kb44\" (UID: \"efc85fb0-e1c4-4a14-aeeb-a0526ff668d1\") " pod="openstack/barbican-worker-666d645645-4kb44" Jan 31 04:07:51 crc kubenswrapper[4667]: I0131 04:07:51.939937 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5d8b947646-tj8c8"] Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:51.968958 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:51.979081 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5d8b947646-tj8c8"] Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:51.985691 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:51.987729 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efc85fb0-e1c4-4a14-aeeb-a0526ff668d1-config-data\") pod \"barbican-worker-666d645645-4kb44\" (UID: \"efc85fb0-e1c4-4a14-aeeb-a0526ff668d1\") " pod="openstack/barbican-worker-666d645645-4kb44" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:51.987763 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc85fb0-e1c4-4a14-aeeb-a0526ff668d1-combined-ca-bundle\") pod \"barbican-worker-666d645645-4kb44\" (UID: \"efc85fb0-e1c4-4a14-aeeb-a0526ff668d1\") " pod="openstack/barbican-worker-666d645645-4kb44" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:51.987812 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbmf8\" (UniqueName: \"kubernetes.io/projected/efc85fb0-e1c4-4a14-aeeb-a0526ff668d1-kube-api-access-kbmf8\") pod \"barbican-worker-666d645645-4kb44\" (UID: \"efc85fb0-e1c4-4a14-aeeb-a0526ff668d1\") " pod="openstack/barbican-worker-666d645645-4kb44" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:51.992125 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efc85fb0-e1c4-4a14-aeeb-a0526ff668d1-logs\") pod \"barbican-worker-666d645645-4kb44\" (UID: \"efc85fb0-e1c4-4a14-aeeb-a0526ff668d1\") " pod="openstack/barbican-worker-666d645645-4kb44" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:51.992260 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efc85fb0-e1c4-4a14-aeeb-a0526ff668d1-config-data-custom\") pod \"barbican-worker-666d645645-4kb44\" (UID: \"efc85fb0-e1c4-4a14-aeeb-a0526ff668d1\") " pod="openstack/barbican-worker-666d645645-4kb44" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.040436 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efc85fb0-e1c4-4a14-aeeb-a0526ff668d1-logs\") pod \"barbican-worker-666d645645-4kb44\" (UID: \"efc85fb0-e1c4-4a14-aeeb-a0526ff668d1\") " pod="openstack/barbican-worker-666d645645-4kb44" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.045644 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc85fb0-e1c4-4a14-aeeb-a0526ff668d1-combined-ca-bundle\") pod \"barbican-worker-666d645645-4kb44\" (UID: \"efc85fb0-e1c4-4a14-aeeb-a0526ff668d1\") " pod="openstack/barbican-worker-666d645645-4kb44" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.049777 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efc85fb0-e1c4-4a14-aeeb-a0526ff668d1-config-data\") pod \"barbican-worker-666d645645-4kb44\" (UID: \"efc85fb0-e1c4-4a14-aeeb-a0526ff668d1\") " 
pod="openstack/barbican-worker-666d645645-4kb44" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.056180 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efc85fb0-e1c4-4a14-aeeb-a0526ff668d1-config-data-custom\") pod \"barbican-worker-666d645645-4kb44\" (UID: \"efc85fb0-e1c4-4a14-aeeb-a0526ff668d1\") " pod="openstack/barbican-worker-666d645645-4kb44" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.064802 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbmf8\" (UniqueName: \"kubernetes.io/projected/efc85fb0-e1c4-4a14-aeeb-a0526ff668d1-kube-api-access-kbmf8\") pod \"barbican-worker-666d645645-4kb44\" (UID: \"efc85fb0-e1c4-4a14-aeeb-a0526ff668d1\") " pod="openstack/barbican-worker-666d645645-4kb44" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.104100 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6848ab0-06c2-4eed-9c5e-a1e205da260a-config-data-custom\") pod \"barbican-keystone-listener-5d8b947646-tj8c8\" (UID: \"c6848ab0-06c2-4eed-9c5e-a1e205da260a\") " pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.104211 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xhj8\" (UniqueName: \"kubernetes.io/projected/c6848ab0-06c2-4eed-9c5e-a1e205da260a-kube-api-access-7xhj8\") pod \"barbican-keystone-listener-5d8b947646-tj8c8\" (UID: \"c6848ab0-06c2-4eed-9c5e-a1e205da260a\") " pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.104364 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6848ab0-06c2-4eed-9c5e-a1e205da260a-logs\") pod \"barbican-keystone-listener-5d8b947646-tj8c8\" (UID: \"c6848ab0-06c2-4eed-9c5e-a1e205da260a\") " pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.104540 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6848ab0-06c2-4eed-9c5e-a1e205da260a-config-data\") pod \"barbican-keystone-listener-5d8b947646-tj8c8\" (UID: \"c6848ab0-06c2-4eed-9c5e-a1e205da260a\") " pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.104572 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6848ab0-06c2-4eed-9c5e-a1e205da260a-combined-ca-bundle\") pod \"barbican-keystone-listener-5d8b947646-tj8c8\" (UID: \"c6848ab0-06c2-4eed-9c5e-a1e205da260a\") " pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.162947 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-qpl4b"] Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.195652 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-666d645645-4kb44" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.211262 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6848ab0-06c2-4eed-9c5e-a1e205da260a-logs\") pod \"barbican-keystone-listener-5d8b947646-tj8c8\" (UID: \"c6848ab0-06c2-4eed-9c5e-a1e205da260a\") " pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.211379 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6848ab0-06c2-4eed-9c5e-a1e205da260a-config-data\") pod \"barbican-keystone-listener-5d8b947646-tj8c8\" (UID: \"c6848ab0-06c2-4eed-9c5e-a1e205da260a\") " pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.211412 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6848ab0-06c2-4eed-9c5e-a1e205da260a-combined-ca-bundle\") pod \"barbican-keystone-listener-5d8b947646-tj8c8\" (UID: \"c6848ab0-06c2-4eed-9c5e-a1e205da260a\") " pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.211474 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6848ab0-06c2-4eed-9c5e-a1e205da260a-config-data-custom\") pod \"barbican-keystone-listener-5d8b947646-tj8c8\" (UID: \"c6848ab0-06c2-4eed-9c5e-a1e205da260a\") " pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.211519 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xhj8\" (UniqueName: \"kubernetes.io/projected/c6848ab0-06c2-4eed-9c5e-a1e205da260a-kube-api-access-7xhj8\") pod \"barbican-keystone-listener-5d8b947646-tj8c8\" (UID: \"c6848ab0-06c2-4eed-9c5e-a1e205da260a\") " pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.212496 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6848ab0-06c2-4eed-9c5e-a1e205da260a-logs\") pod \"barbican-keystone-listener-5d8b947646-tj8c8\" (UID: \"c6848ab0-06c2-4eed-9c5e-a1e205da260a\") " pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.260417 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-l6h6r"] Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.265709 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.276177 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xhj8\" (UniqueName: \"kubernetes.io/projected/c6848ab0-06c2-4eed-9c5e-a1e205da260a-kube-api-access-7xhj8\") pod \"barbican-keystone-listener-5d8b947646-tj8c8\" (UID: \"c6848ab0-06c2-4eed-9c5e-a1e205da260a\") " pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.284591 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6848ab0-06c2-4eed-9c5e-a1e205da260a-config-data-custom\") pod \"barbican-keystone-listener-5d8b947646-tj8c8\" (UID: \"c6848ab0-06c2-4eed-9c5e-a1e205da260a\") " pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.289799 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6848ab0-06c2-4eed-9c5e-a1e205da260a-combined-ca-bundle\") pod \"barbican-keystone-listener-5d8b947646-tj8c8\" (UID: \"c6848ab0-06c2-4eed-9c5e-a1e205da260a\") " pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.303908 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6848ab0-06c2-4eed-9c5e-a1e205da260a-config-data\") pod \"barbican-keystone-listener-5d8b947646-tj8c8\" (UID: \"c6848ab0-06c2-4eed-9c5e-a1e205da260a\") " pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.319108 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-l6h6r\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.319153 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6gvc\" (UniqueName: \"kubernetes.io/projected/899eb625-8bc5-458d-88f4-cc8ccc4bd261-kube-api-access-w6gvc\") pod \"dnsmasq-dns-5c9776ccc5-l6h6r\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.319201 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-l6h6r\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.319226 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-config\") pod \"dnsmasq-dns-5c9776ccc5-l6h6r\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.319254 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-l6h6r\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.319321 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-l6h6r\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.321910 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-l6h6r"] Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.361949 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.395490 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6fb8dc74db-tdj6x"] Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.397440 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.403128 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.411426 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6fb8dc74db-tdj6x"] Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.421748 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-l6h6r\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.421990 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6gvc\" (UniqueName: \"kubernetes.io/projected/899eb625-8bc5-458d-88f4-cc8ccc4bd261-kube-api-access-w6gvc\") pod \"dnsmasq-dns-5c9776ccc5-l6h6r\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.422101 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-l6h6r\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.422205 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-config\") pod \"dnsmasq-dns-5c9776ccc5-l6h6r\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.422292 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-l6h6r\" (UID: 
\"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.422404 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-l6h6r\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.423373 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-l6h6r\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.426304 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-l6h6r\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.427484 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-l6h6r\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.429004 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-l6h6r\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.429576 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-config\") pod \"dnsmasq-dns-5c9776ccc5-l6h6r\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.471789 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6gvc\" (UniqueName: \"kubernetes.io/projected/899eb625-8bc5-458d-88f4-cc8ccc4bd261-kube-api-access-w6gvc\") pod \"dnsmasq-dns-5c9776ccc5-l6h6r\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.526095 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707e0230-af22-42d8-9d59-8ea928b3178c-combined-ca-bundle\") pod \"barbican-api-6fb8dc74db-tdj6x\" (UID: \"707e0230-af22-42d8-9d59-8ea928b3178c\") " pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.526167 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/707e0230-af22-42d8-9d59-8ea928b3178c-logs\") pod \"barbican-api-6fb8dc74db-tdj6x\" (UID: \"707e0230-af22-42d8-9d59-8ea928b3178c\") " 
pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.526246 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9gkd\" (UniqueName: \"kubernetes.io/projected/707e0230-af22-42d8-9d59-8ea928b3178c-kube-api-access-v9gkd\") pod \"barbican-api-6fb8dc74db-tdj6x\" (UID: \"707e0230-af22-42d8-9d59-8ea928b3178c\") " pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.526359 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/707e0230-af22-42d8-9d59-8ea928b3178c-config-data-custom\") pod \"barbican-api-6fb8dc74db-tdj6x\" (UID: \"707e0230-af22-42d8-9d59-8ea928b3178c\") " pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.526394 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707e0230-af22-42d8-9d59-8ea928b3178c-config-data\") pod \"barbican-api-6fb8dc74db-tdj6x\" (UID: \"707e0230-af22-42d8-9d59-8ea928b3178c\") " pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.567237 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.628338 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9gkd\" (UniqueName: \"kubernetes.io/projected/707e0230-af22-42d8-9d59-8ea928b3178c-kube-api-access-v9gkd\") pod \"barbican-api-6fb8dc74db-tdj6x\" (UID: \"707e0230-af22-42d8-9d59-8ea928b3178c\") " pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.628808 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/707e0230-af22-42d8-9d59-8ea928b3178c-config-data-custom\") pod \"barbican-api-6fb8dc74db-tdj6x\" (UID: \"707e0230-af22-42d8-9d59-8ea928b3178c\") " pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.628937 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707e0230-af22-42d8-9d59-8ea928b3178c-config-data\") pod \"barbican-api-6fb8dc74db-tdj6x\" (UID: \"707e0230-af22-42d8-9d59-8ea928b3178c\") " pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.628978 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707e0230-af22-42d8-9d59-8ea928b3178c-combined-ca-bundle\") pod \"barbican-api-6fb8dc74db-tdj6x\" (UID: \"707e0230-af22-42d8-9d59-8ea928b3178c\") " pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.629005 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/707e0230-af22-42d8-9d59-8ea928b3178c-logs\") pod \"barbican-api-6fb8dc74db-tdj6x\" (UID: \"707e0230-af22-42d8-9d59-8ea928b3178c\") " pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.630090 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/707e0230-af22-42d8-9d59-8ea928b3178c-logs\") pod \"barbican-api-6fb8dc74db-tdj6x\" (UID: \"707e0230-af22-42d8-9d59-8ea928b3178c\") " pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.635178 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707e0230-af22-42d8-9d59-8ea928b3178c-combined-ca-bundle\") pod \"barbican-api-6fb8dc74db-tdj6x\" (UID: \"707e0230-af22-42d8-9d59-8ea928b3178c\") " pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.640424 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/707e0230-af22-42d8-9d59-8ea928b3178c-config-data-custom\") pod \"barbican-api-6fb8dc74db-tdj6x\" (UID: \"707e0230-af22-42d8-9d59-8ea928b3178c\") " pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.641486 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707e0230-af22-42d8-9d59-8ea928b3178c-config-data\") pod \"barbican-api-6fb8dc74db-tdj6x\" (UID: \"707e0230-af22-42d8-9d59-8ea928b3178c\") " pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.652662 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9gkd\" (UniqueName: \"kubernetes.io/projected/707e0230-af22-42d8-9d59-8ea928b3178c-kube-api-access-v9gkd\") pod \"barbican-api-6fb8dc74db-tdj6x\" (UID: \"707e0230-af22-42d8-9d59-8ea928b3178c\") " pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.695062 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.783403 4667 generic.go:334] "Generic (PLEG): container finished" podID="6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2" containerID="08879eefbbdaa4345d895621f6333cf56e632a4aea1e840ef6190d445e782463" exitCode=0 Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.783508 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b895b5785-qpl4b" event={"ID":"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2","Type":"ContainerDied","Data":"08879eefbbdaa4345d895621f6333cf56e632a4aea1e840ef6190d445e782463"} Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.787780 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.819860 4667 generic.go:334] "Generic (PLEG): container finished" podID="c175848a-4645-42e7-8ccc-ab873e1ff7aa" containerID="3f956df323dcf5ea513fbf4fca63c5b6c48b46d1d34cec9e0da1d18d570c1f71" exitCode=2 Jan 31 04:07:52 crc kubenswrapper[4667]: I0131 04:07:52.819949 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c175848a-4645-42e7-8ccc-ab873e1ff7aa","Type":"ContainerDied","Data":"3f956df323dcf5ea513fbf4fca63c5b6c48b46d1d34cec9e0da1d18d570c1f71"} Jan 31 04:07:53 crc kubenswrapper[4667]: I0131 04:07:53.523494 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:07:53 crc kubenswrapper[4667]: I0131 04:07:53.642636 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-746c944c96-t4g84"] Jan 31 04:07:53 crc kubenswrapper[4667]: I0131 04:07:53.643280 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-746c944c96-t4g84" podUID="43842154-1666-491b-b37a-061c1a7c2b90" containerName="neutron-api" containerID="cri-o://15267c0b3933699e7d571eb96c51d917a3ecf248dccdd25d9641dad4133590cc" gracePeriod=30 Jan 31 04:07:53 crc kubenswrapper[4667]: I0131 04:07:53.882281 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"29617aa3-f0e0-4528-9ba6-1385314227d9","Type":"ContainerStarted","Data":"209f03fa0ee39240735a4626e46b8eee5a7d4acbd72c037b10d1abe6f27f2cad"} Jan 31 04:07:53 crc kubenswrapper[4667]: I0131 04:07:53.900793 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.070234 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7f55cc74b5-gg8dl"] Jan 31 04:07:54 crc kubenswrapper[4667]: E0131 04:07:54.070672 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2" containerName="init" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.070690 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2" containerName="init" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.071000 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2" containerName="init" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.077855 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.085365 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-ovsdbserver-sb\") pod \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.085625 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-ovsdbserver-nb\") pod \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.085755 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-dns-svc\") pod \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.085867 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgfgp\" (UniqueName: \"kubernetes.io/projected/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-kube-api-access-rgfgp\") pod \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.085967 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-dns-swift-storage-0\") pod \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.086137 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-config\") pod \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\" (UID: \"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2\") " Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.136180 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-kube-api-access-rgfgp" (OuterVolumeSpecName: "kube-api-access-rgfgp") pod "6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2" (UID: "6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2"). InnerVolumeSpecName "kube-api-access-rgfgp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.170084 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5d8b947646-tj8c8"] Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.191521 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48966487-81e5-4e5d-9a74-fbbf2b1091ae-ovndb-tls-certs\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.191598 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48966487-81e5-4e5d-9a74-fbbf2b1091ae-internal-tls-certs\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.191643 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48966487-81e5-4e5d-9a74-fbbf2b1091ae-public-tls-certs\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.191661 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/48966487-81e5-4e5d-9a74-fbbf2b1091ae-httpd-config\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.191704 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48966487-81e5-4e5d-9a74-fbbf2b1091ae-combined-ca-bundle\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.191781 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/48966487-81e5-4e5d-9a74-fbbf2b1091ae-config\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.191812 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbllw\" (UniqueName: \"kubernetes.io/projected/48966487-81e5-4e5d-9a74-fbbf2b1091ae-kube-api-access-mbllw\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.191918 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgfgp\" (UniqueName: \"kubernetes.io/projected/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-kube-api-access-rgfgp\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.238303 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-ovsdbserver-sb" (OuterVolumeSpecName: 
"ovsdbserver-sb") pod "6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2" (UID: "6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.332400 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/48966487-81e5-4e5d-9a74-fbbf2b1091ae-config\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.366258 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbllw\" (UniqueName: \"kubernetes.io/projected/48966487-81e5-4e5d-9a74-fbbf2b1091ae-kube-api-access-mbllw\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.366508 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48966487-81e5-4e5d-9a74-fbbf2b1091ae-ovndb-tls-certs\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.366554 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48966487-81e5-4e5d-9a74-fbbf2b1091ae-internal-tls-certs\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.366653 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48966487-81e5-4e5d-9a74-fbbf2b1091ae-public-tls-certs\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.366689 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/48966487-81e5-4e5d-9a74-fbbf2b1091ae-httpd-config\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.366794 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48966487-81e5-4e5d-9a74-fbbf2b1091ae-combined-ca-bundle\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.367174 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.377317 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48966487-81e5-4e5d-9a74-fbbf2b1091ae-combined-ca-bundle\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: 
I0131 04:07:54.381857 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2" (UID: "6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.383586 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48966487-81e5-4e5d-9a74-fbbf2b1091ae-public-tls-certs\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.392467 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/48966487-81e5-4e5d-9a74-fbbf2b1091ae-httpd-config\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.393171 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48966487-81e5-4e5d-9a74-fbbf2b1091ae-internal-tls-certs\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.461894 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48966487-81e5-4e5d-9a74-fbbf2b1091ae-ovndb-tls-certs\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.463901 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbllw\" (UniqueName: \"kubernetes.io/projected/48966487-81e5-4e5d-9a74-fbbf2b1091ae-kube-api-access-mbllw\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.470376 4667 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.492168 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f55cc74b5-gg8dl"] Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.537008 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-config" (OuterVolumeSpecName: "config") pod "6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2" (UID: "6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.558024 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2" (UID: "6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.558715 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2" (UID: "6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.559246 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/48966487-81e5-4e5d-9a74-fbbf2b1091ae-config\") pod \"neutron-7f55cc74b5-gg8dl\" (UID: \"48966487-81e5-4e5d-9a74-fbbf2b1091ae\") " pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.573571 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.573615 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.573626 4667 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.595157 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-666d645645-4kb44"] Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.615350 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-l6h6r"] Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.636091 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6fb8dc74db-tdj6x"] Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.784859 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.919514 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-666d645645-4kb44" event={"ID":"efc85fb0-e1c4-4a14-aeeb-a0526ff668d1","Type":"ContainerStarted","Data":"bd6f6c57b0d7c90df212abf0d9228c9603fc43aa2fb8396dc22b4999904b2bc7"} Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.924825 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb8dc74db-tdj6x" event={"ID":"707e0230-af22-42d8-9d59-8ea928b3178c","Type":"ContainerStarted","Data":"6b1008f4cbdffc4d5bf5570b1211c92ccfe0a08346d0adb343500f1f74229778"} Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.940274 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" event={"ID":"899eb625-8bc5-458d-88f4-cc8ccc4bd261","Type":"ContainerStarted","Data":"43d90f16af7fa3a8aa01b601cf9dfb44aad45aea7c673c29b789c8f8a44a3cde"} Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.945228 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" event={"ID":"c6848ab0-06c2-4eed-9c5e-a1e205da260a","Type":"ContainerStarted","Data":"062b1266828d3712d4530cb8729919fafbda1580132d9f5d6e71e1ba6a6890fe"} Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.948692 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b895b5785-qpl4b" event={"ID":"6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2","Type":"ContainerDied","Data":"50f8db8655a7b60928d1678914b5cdd787ca94322b38bb87e2ec9ca4b9c19438"} Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.948907 4667 scope.go:117] "RemoveContainer" containerID="08879eefbbdaa4345d895621f6333cf56e632a4aea1e840ef6190d445e782463" Jan 31 04:07:54 crc kubenswrapper[4667]: I0131 04:07:54.949087 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-qpl4b" Jan 31 04:07:55 crc kubenswrapper[4667]: I0131 04:07:55.228282 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-qpl4b"] Jan 31 04:07:55 crc kubenswrapper[4667]: I0131 04:07:55.255492 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-qpl4b"] Jan 31 04:07:55 crc kubenswrapper[4667]: I0131 04:07:55.352311 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2" path="/var/lib/kubelet/pods/6b2ccb6d-deb7-412d-b14a-0ec18b3ba9f2/volumes" Jan 31 04:07:55 crc kubenswrapper[4667]: I0131 04:07:55.512797 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f55cc74b5-gg8dl"] Jan 31 04:07:55 crc kubenswrapper[4667]: W0131 04:07:55.556910 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48966487_81e5_4e5d_9a74_fbbf2b1091ae.slice/crio-d198b534f3d13e1309c30119cef727f206c7c7b302789435fadc0c8016847f52 WatchSource:0}: Error finding container d198b534f3d13e1309c30119cef727f206c7c7b302789435fadc0c8016847f52: Status 404 returned error can't find the container with id d198b534f3d13e1309c30119cef727f206c7c7b302789435fadc0c8016847f52 Jan 31 04:07:55 crc kubenswrapper[4667]: I0131 04:07:55.995449 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"847c3d86-2c1d-4b19-9558-5c03c65e1539","Type":"ContainerStarted","Data":"45ba0f5bf2e9664f1fa0c323e72ce0181803c9fa89cafd2f96e094f86680a277"} Jan 31 04:07:56 crc kubenswrapper[4667]: I0131 04:07:56.003409 4667 generic.go:334] "Generic (PLEG): container finished" podID="899eb625-8bc5-458d-88f4-cc8ccc4bd261" containerID="5c1ff677289b4ede3000837cdef077d10eaa071ca64adc07ac9fa8a9270a5165" exitCode=0 Jan 31 04:07:56 crc kubenswrapper[4667]: I0131 04:07:56.003480 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" event={"ID":"899eb625-8bc5-458d-88f4-cc8ccc4bd261","Type":"ContainerDied","Data":"5c1ff677289b4ede3000837cdef077d10eaa071ca64adc07ac9fa8a9270a5165"} Jan 31 04:07:56 crc kubenswrapper[4667]: I0131 04:07:56.011959 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f55cc74b5-gg8dl" event={"ID":"48966487-81e5-4e5d-9a74-fbbf2b1091ae","Type":"ContainerStarted","Data":"d198b534f3d13e1309c30119cef727f206c7c7b302789435fadc0c8016847f52"} Jan 31 04:07:56 crc kubenswrapper[4667]: I0131 04:07:56.047812 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb8dc74db-tdj6x" event={"ID":"707e0230-af22-42d8-9d59-8ea928b3178c","Type":"ContainerStarted","Data":"a1afd66ffbade368b61e6fdee49de378fa084e0f40cadeac24baeaf05305f8cc"} Jan 31 04:07:56 crc kubenswrapper[4667]: I0131 04:07:56.049625 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:07:56 crc kubenswrapper[4667]: I0131 04:07:56.049660 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:07:56 crc kubenswrapper[4667]: I0131 04:07:56.085375 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6fb8dc74db-tdj6x" podStartSLOduration=4.085348638 podStartE2EDuration="4.085348638s" podCreationTimestamp="2026-01-31 04:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:56.077018487 +0000 UTC m=+1199.593353796" watchObservedRunningTime="2026-01-31 04:07:56.085348638 +0000 UTC m=+1199.601683937" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:56.873405 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7b9d9fcc56-wmjp8"] Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:56.875489 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:56.879336 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:56.879520 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.047535 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8b59858-7b18-4bad-b555-b978f3fbea56-logs\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.047634 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6jps\" (UniqueName: \"kubernetes.io/projected/d8b59858-7b18-4bad-b555-b978f3fbea56-kube-api-access-m6jps\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.047698 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8b59858-7b18-4bad-b555-b978f3fbea56-config-data-custom\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.047725 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8b59858-7b18-4bad-b555-b978f3fbea56-public-tls-certs\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.047751 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b59858-7b18-4bad-b555-b978f3fbea56-config-data\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.047786 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8b59858-7b18-4bad-b555-b978f3fbea56-internal-tls-certs\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.047826 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d8b59858-7b18-4bad-b555-b978f3fbea56-combined-ca-bundle\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.156410 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8b59858-7b18-4bad-b555-b978f3fbea56-config-data-custom\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.156461 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8b59858-7b18-4bad-b555-b978f3fbea56-public-tls-certs\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.156495 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b59858-7b18-4bad-b555-b978f3fbea56-config-data\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.156541 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8b59858-7b18-4bad-b555-b978f3fbea56-internal-tls-certs\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.156592 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b59858-7b18-4bad-b555-b978f3fbea56-combined-ca-bundle\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.156648 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8b59858-7b18-4bad-b555-b978f3fbea56-logs\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.156706 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6jps\" (UniqueName: \"kubernetes.io/projected/d8b59858-7b18-4bad-b555-b978f3fbea56-kube-api-access-m6jps\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.163371 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8b59858-7b18-4bad-b555-b978f3fbea56-config-data-custom\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.170952 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d8b59858-7b18-4bad-b555-b978f3fbea56-internal-tls-certs\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.178037 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b59858-7b18-4bad-b555-b978f3fbea56-combined-ca-bundle\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.178277 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8b59858-7b18-4bad-b555-b978f3fbea56-logs\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.222657 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" event={"ID":"899eb625-8bc5-458d-88f4-cc8ccc4bd261","Type":"ContainerStarted","Data":"ca6176e07daa5e1d1933f71e0f5f558937a550af6023da0d0e967c90fabf04a7"} Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.224070 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b59858-7b18-4bad-b555-b978f3fbea56-config-data\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.226824 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.240724 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6jps\" (UniqueName: \"kubernetes.io/projected/d8b59858-7b18-4bad-b555-b978f3fbea56-kube-api-access-m6jps\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.241831 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8b59858-7b18-4bad-b555-b978f3fbea56-public-tls-certs\") pod \"barbican-api-7b9d9fcc56-wmjp8\" (UID: \"d8b59858-7b18-4bad-b555-b978f3fbea56\") " pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.243774 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"29617aa3-f0e0-4528-9ba6-1385314227d9","Type":"ContainerStarted","Data":"77d88734b6ac59c22b44af3ed81aae7205d959245c4c52fd4bd26209dc3b501f"} Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.244113 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="29617aa3-f0e0-4528-9ba6-1385314227d9" containerName="cinder-api-log" containerID="cri-o://209f03fa0ee39240735a4626e46b8eee5a7d4acbd72c037b10d1abe6f27f2cad" gracePeriod=30 Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.244224 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.244272 4667 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/cinder-api-0" podUID="29617aa3-f0e0-4528-9ba6-1385314227d9" containerName="cinder-api" containerID="cri-o://77d88734b6ac59c22b44af3ed81aae7205d959245c4c52fd4bd26209dc3b501f" gracePeriod=30 Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.250148 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b9d9fcc56-wmjp8"] Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.299546 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.363756 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" podStartSLOduration=6.363728644 podStartE2EDuration="6.363728644s" podCreationTimestamp="2026-01-31 04:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:57.269414405 +0000 UTC m=+1200.785749714" watchObservedRunningTime="2026-01-31 04:07:57.363728644 +0000 UTC m=+1200.880063943" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.382563 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.382528403 podStartE2EDuration="8.382528403s" podCreationTimestamp="2026-01-31 04:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:57.333984636 +0000 UTC m=+1200.850319935" watchObservedRunningTime="2026-01-31 04:07:57.382528403 +0000 UTC m=+1200.898863702" Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.399182 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f55cc74b5-gg8dl" event={"ID":"48966487-81e5-4e5d-9a74-fbbf2b1091ae","Type":"ContainerStarted","Data":"bd420ccea9a856a2a8892c9ccce8c8ee5309c7da177cf83152a1acfecc8006ce"} Jan 31 04:07:57 crc kubenswrapper[4667]: I0131 04:07:57.417881 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb8dc74db-tdj6x" event={"ID":"707e0230-af22-42d8-9d59-8ea928b3178c","Type":"ContainerStarted","Data":"85abfdaf0b5e2a34ed14e24152198b322f8ad1b18fc601fa4aa97cd296831c03"} Jan 31 04:07:58 crc kubenswrapper[4667]: I0131 04:07:58.439808 4667 generic.go:334] "Generic (PLEG): container finished" podID="c175848a-4645-42e7-8ccc-ab873e1ff7aa" containerID="5f3054a5c6f2254b318f9d5a214799bad34be30fa6a4ea2f5072c54c61f95f3b" exitCode=0 Jan 31 04:07:58 crc kubenswrapper[4667]: I0131 04:07:58.439938 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c175848a-4645-42e7-8ccc-ab873e1ff7aa","Type":"ContainerDied","Data":"5f3054a5c6f2254b318f9d5a214799bad34be30fa6a4ea2f5072c54c61f95f3b"} Jan 31 04:07:58 crc kubenswrapper[4667]: I0131 04:07:58.442454 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"847c3d86-2c1d-4b19-9558-5c03c65e1539","Type":"ContainerStarted","Data":"7c5e912076e31ea7ada41c76fb183375c6e23b3958824be4a2c20ea2962c8b42"} Jan 31 04:07:58 crc kubenswrapper[4667]: I0131 04:07:58.447947 4667 generic.go:334] "Generic (PLEG): container finished" podID="29617aa3-f0e0-4528-9ba6-1385314227d9" containerID="209f03fa0ee39240735a4626e46b8eee5a7d4acbd72c037b10d1abe6f27f2cad" exitCode=143 Jan 31 04:07:58 crc kubenswrapper[4667]: I0131 04:07:58.448021 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"29617aa3-f0e0-4528-9ba6-1385314227d9","Type":"ContainerDied","Data":"209f03fa0ee39240735a4626e46b8eee5a7d4acbd72c037b10d1abe6f27f2cad"} Jan 31 04:07:58 crc kubenswrapper[4667]: I0131 04:07:58.451706 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f55cc74b5-gg8dl" event={"ID":"48966487-81e5-4e5d-9a74-fbbf2b1091ae","Type":"ContainerStarted","Data":"2b103028730ee23d37f6a886ff7fb5d8f1c70a0ad90b0faebcfac543c4030db3"} Jan 31 04:07:58 crc kubenswrapper[4667]: I0131 04:07:58.451739 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:07:58 crc kubenswrapper[4667]: I0131 04:07:58.488122 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=8.796376171 podStartE2EDuration="10.488098459s" podCreationTimestamp="2026-01-31 04:07:48 +0000 UTC" firstStartedPulling="2026-01-31 04:07:51.142335414 +0000 UTC m=+1194.658670703" lastFinishedPulling="2026-01-31 04:07:52.834057682 +0000 UTC m=+1196.350392991" observedRunningTime="2026-01-31 04:07:58.481388472 +0000 UTC m=+1201.997723771" watchObservedRunningTime="2026-01-31 04:07:58.488098459 +0000 UTC m=+1202.004433758" Jan 31 04:07:58 crc kubenswrapper[4667]: I0131 04:07:58.511861 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7f55cc74b5-gg8dl" podStartSLOduration=4.511819418 podStartE2EDuration="4.511819418s" podCreationTimestamp="2026-01-31 04:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:07:58.511329595 +0000 UTC m=+1202.027664894" watchObservedRunningTime="2026-01-31 04:07:58.511819418 +0000 UTC m=+1202.028154717" Jan 31 04:07:59 crc kubenswrapper[4667]: I0131 04:07:59.253147 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 04:08:00 crc kubenswrapper[4667]: I0131 04:08:00.482960 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" event={"ID":"c6848ab0-06c2-4eed-9c5e-a1e205da260a","Type":"ContainerStarted","Data":"0a517069223b737862b8787daaead67f5c8469f5b2ba50d4471275ca12481b2b"} Jan 31 04:08:00 crc kubenswrapper[4667]: I0131 04:08:00.489786 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-666d645645-4kb44" event={"ID":"efc85fb0-e1c4-4a14-aeeb-a0526ff668d1","Type":"ContainerStarted","Data":"2f75ce16a25113344463c44b045cf36cad961bced3bebcbfe9548fe4c5b72047"} Jan 31 04:08:00 crc kubenswrapper[4667]: I0131 04:08:00.508692 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b9d9fcc56-wmjp8"] Jan 31 04:08:01 crc kubenswrapper[4667]: I0131 04:08:01.502910 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" event={"ID":"c6848ab0-06c2-4eed-9c5e-a1e205da260a","Type":"ContainerStarted","Data":"49edeb3dbdb3da2dcefe49db280f7298a559a00f0269e123fde08e99eadc5270"} Jan 31 04:08:01 crc kubenswrapper[4667]: I0131 04:08:01.504703 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b9d9fcc56-wmjp8" event={"ID":"d8b59858-7b18-4bad-b555-b978f3fbea56","Type":"ContainerStarted","Data":"5669a870832d35e2bd2d4b9acde5da22cd9182f99bb17a4cb7eedc338e4bd312"} Jan 31 04:08:01 crc kubenswrapper[4667]: I0131 04:08:01.504767 4667 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b9d9fcc56-wmjp8" event={"ID":"d8b59858-7b18-4bad-b555-b978f3fbea56","Type":"ContainerStarted","Data":"cb091add9d670a021431650ee19e102836401117a990d248e18e7bedfa33a16a"} Jan 31 04:08:01 crc kubenswrapper[4667]: I0131 04:08:01.504782 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b9d9fcc56-wmjp8" event={"ID":"d8b59858-7b18-4bad-b555-b978f3fbea56","Type":"ContainerStarted","Data":"bc54eea42dec5f331937ec3450f81213c82e4144b6ea0918dee672dc8058ab7e"} Jan 31 04:08:01 crc kubenswrapper[4667]: I0131 04:08:01.504893 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:08:01 crc kubenswrapper[4667]: I0131 04:08:01.504936 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:08:01 crc kubenswrapper[4667]: I0131 04:08:01.506720 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-666d645645-4kb44" event={"ID":"efc85fb0-e1c4-4a14-aeeb-a0526ff668d1","Type":"ContainerStarted","Data":"72fad79407be20fff9c1510cc58d87ecfb13d329d7d52fe47f4b78f759fd917f"} Jan 31 04:08:01 crc kubenswrapper[4667]: I0131 04:08:01.531924 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5d8b947646-tj8c8" podStartSLOduration=4.892633673 podStartE2EDuration="10.531894128s" podCreationTimestamp="2026-01-31 04:07:51 +0000 UTC" firstStartedPulling="2026-01-31 04:07:54.243340087 +0000 UTC m=+1197.759675386" lastFinishedPulling="2026-01-31 04:07:59.882600542 +0000 UTC m=+1203.398935841" observedRunningTime="2026-01-31 04:08:01.527833031 +0000 UTC m=+1205.044168330" watchObservedRunningTime="2026-01-31 04:08:01.531894128 +0000 UTC m=+1205.048229437" Jan 31 04:08:01 crc kubenswrapper[4667]: I0131 04:08:01.565217 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-666d645645-4kb44" podStartSLOduration=5.119829714 podStartE2EDuration="10.565191441s" podCreationTimestamp="2026-01-31 04:07:51 +0000 UTC" firstStartedPulling="2026-01-31 04:07:54.440944823 +0000 UTC m=+1197.957280122" lastFinishedPulling="2026-01-31 04:07:59.88630654 +0000 UTC m=+1203.402641849" observedRunningTime="2026-01-31 04:08:01.555688949 +0000 UTC m=+1205.072024258" watchObservedRunningTime="2026-01-31 04:08:01.565191441 +0000 UTC m=+1205.081526740" Jan 31 04:08:01 crc kubenswrapper[4667]: I0131 04:08:01.598620 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7b9d9fcc56-wmjp8" podStartSLOduration=5.598589836 podStartE2EDuration="5.598589836s" podCreationTimestamp="2026-01-31 04:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:08:01.583189397 +0000 UTC m=+1205.099524696" watchObservedRunningTime="2026-01-31 04:08:01.598589836 +0000 UTC m=+1205.114925135" Jan 31 04:08:01 crc kubenswrapper[4667]: I0131 04:08:01.756235 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 31 04:08:01 crc kubenswrapper[4667]: I0131 04:08:01.756331 4667 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:08:01 crc kubenswrapper[4667]: I0131 04:08:01.757976 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"75959a94e1776a7025f344a57c090542bf63fb0615110c632e65e3a8c9188b18"} pod="openstack/horizon-78789d8f44-5trmc" containerMessage="Container horizon failed startup probe, will be restarted" Jan 31 04:08:01 crc kubenswrapper[4667]: I0131 04:08:01.758073 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" containerID="cri-o://75959a94e1776a7025f344a57c090542bf63fb0615110c632e65e3a8c9188b18" gracePeriod=30 Jan 31 04:08:01 crc kubenswrapper[4667]: I0131 04:08:01.860252 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86c748c4d6-2grmh" podUID="c6974567-3bea-447a-bb8b-ced22b6d34ce" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 31 04:08:01 crc kubenswrapper[4667]: I0131 04:08:01.860346 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:08:01 crc kubenswrapper[4667]: I0131 04:08:01.861292 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"3fa239e2b62f1e7aacddff89f2ed28a743b788c82b3a5252236ff48d58158880"} pod="openstack/horizon-86c748c4d6-2grmh" containerMessage="Container horizon failed startup probe, will be restarted" Jan 31 04:08:01 crc kubenswrapper[4667]: I0131 04:08:01.861338 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-86c748c4d6-2grmh" podUID="c6974567-3bea-447a-bb8b-ced22b6d34ce" containerName="horizon" containerID="cri-o://3fa239e2b62f1e7aacddff89f2ed28a743b788c82b3a5252236ff48d58158880" gracePeriod=30 Jan 31 04:08:02 crc kubenswrapper[4667]: I0131 04:08:02.701979 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:08:02 crc kubenswrapper[4667]: I0131 04:08:02.804819 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-c58f7"] Jan 31 04:08:02 crc kubenswrapper[4667]: I0131 04:08:02.815461 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-c58f7" podUID="23ada731-7288-4699-9ae2-d1bde47a02a2" containerName="dnsmasq-dns" containerID="cri-o://04031cafab9c8ee2081c1d44fa5555b5c1ede2c62553aa59a7e3863f5e2cb39e" gracePeriod=10 Jan 31 04:08:03 crc kubenswrapper[4667]: I0131 04:08:03.545191 4667 generic.go:334] "Generic (PLEG): container finished" podID="23ada731-7288-4699-9ae2-d1bde47a02a2" containerID="04031cafab9c8ee2081c1d44fa5555b5c1ede2c62553aa59a7e3863f5e2cb39e" exitCode=0 Jan 31 04:08:03 crc kubenswrapper[4667]: I0131 04:08:03.545945 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-c58f7" event={"ID":"23ada731-7288-4699-9ae2-d1bde47a02a2","Type":"ContainerDied","Data":"04031cafab9c8ee2081c1d44fa5555b5c1ede2c62553aa59a7e3863f5e2cb39e"} Jan 31 04:08:03 crc kubenswrapper[4667]: I0131 04:08:03.546003 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-55f844cf75-c58f7" event={"ID":"23ada731-7288-4699-9ae2-d1bde47a02a2","Type":"ContainerDied","Data":"8bf14bbfb0697cd8ca5db878235e1722f6a7221626113ed0a8b3441f92987685"} Jan 31 04:08:03 crc kubenswrapper[4667]: I0131 04:08:03.546016 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bf14bbfb0697cd8ca5db878235e1722f6a7221626113ed0a8b3441f92987685" Jan 31 04:08:03 crc kubenswrapper[4667]: I0131 04:08:03.642264 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:08:03 crc kubenswrapper[4667]: I0131 04:08:03.720062 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-dns-svc\") pod \"23ada731-7288-4699-9ae2-d1bde47a02a2\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " Jan 31 04:08:03 crc kubenswrapper[4667]: I0131 04:08:03.720230 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-ovsdbserver-nb\") pod \"23ada731-7288-4699-9ae2-d1bde47a02a2\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " Jan 31 04:08:03 crc kubenswrapper[4667]: I0131 04:08:03.824660 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-config\") pod \"23ada731-7288-4699-9ae2-d1bde47a02a2\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " Jan 31 04:08:03 crc kubenswrapper[4667]: I0131 04:08:03.824748 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-ovsdbserver-sb\") pod \"23ada731-7288-4699-9ae2-d1bde47a02a2\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " Jan 31 04:08:03 crc kubenswrapper[4667]: I0131 04:08:03.825667 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srxvk\" (UniqueName: \"kubernetes.io/projected/23ada731-7288-4699-9ae2-d1bde47a02a2-kube-api-access-srxvk\") pod \"23ada731-7288-4699-9ae2-d1bde47a02a2\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " Jan 31 04:08:03 crc kubenswrapper[4667]: I0131 04:08:03.825746 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-dns-swift-storage-0\") pod \"23ada731-7288-4699-9ae2-d1bde47a02a2\" (UID: \"23ada731-7288-4699-9ae2-d1bde47a02a2\") " Jan 31 04:08:03 crc kubenswrapper[4667]: I0131 04:08:03.863416 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23ada731-7288-4699-9ae2-d1bde47a02a2" (UID: "23ada731-7288-4699-9ae2-d1bde47a02a2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:08:03 crc kubenswrapper[4667]: I0131 04:08:03.891492 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "23ada731-7288-4699-9ae2-d1bde47a02a2" (UID: "23ada731-7288-4699-9ae2-d1bde47a02a2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:08:03 crc kubenswrapper[4667]: I0131 04:08:03.902884 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23ada731-7288-4699-9ae2-d1bde47a02a2-kube-api-access-srxvk" (OuterVolumeSpecName: "kube-api-access-srxvk") pod "23ada731-7288-4699-9ae2-d1bde47a02a2" (UID: "23ada731-7288-4699-9ae2-d1bde47a02a2"). InnerVolumeSpecName "kube-api-access-srxvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:08:03 crc kubenswrapper[4667]: I0131 04:08:03.928253 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srxvk\" (UniqueName: \"kubernetes.io/projected/23ada731-7288-4699-9ae2-d1bde47a02a2-kube-api-access-srxvk\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:03 crc kubenswrapper[4667]: I0131 04:08:03.928291 4667 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:03 crc kubenswrapper[4667]: I0131 04:08:03.928301 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:04 crc kubenswrapper[4667]: I0131 04:08:04.027436 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "23ada731-7288-4699-9ae2-d1bde47a02a2" (UID: "23ada731-7288-4699-9ae2-d1bde47a02a2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:08:04 crc kubenswrapper[4667]: I0131 04:08:04.030372 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:04 crc kubenswrapper[4667]: I0131 04:08:04.035261 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-config" (OuterVolumeSpecName: "config") pod "23ada731-7288-4699-9ae2-d1bde47a02a2" (UID: "23ada731-7288-4699-9ae2-d1bde47a02a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:08:04 crc kubenswrapper[4667]: I0131 04:08:04.073474 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "23ada731-7288-4699-9ae2-d1bde47a02a2" (UID: "23ada731-7288-4699-9ae2-d1bde47a02a2"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:08:04 crc kubenswrapper[4667]: I0131 04:08:04.133946 4667 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:04 crc kubenswrapper[4667]: I0131 04:08:04.133980 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23ada731-7288-4699-9ae2-d1bde47a02a2-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:04 crc kubenswrapper[4667]: I0131 04:08:04.554578 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-c58f7" Jan 31 04:08:04 crc kubenswrapper[4667]: I0131 04:08:04.597631 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-c58f7"] Jan 31 04:08:04 crc kubenswrapper[4667]: I0131 04:08:04.609373 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-c58f7"] Jan 31 04:08:04 crc kubenswrapper[4667]: I0131 04:08:04.676632 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 31 04:08:04 crc kubenswrapper[4667]: I0131 04:08:04.719558 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 31 04:08:05 crc kubenswrapper[4667]: I0131 04:08:05.304144 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23ada731-7288-4699-9ae2-d1bde47a02a2" path="/var/lib/kubelet/pods/23ada731-7288-4699-9ae2-d1bde47a02a2/volumes" Jan 31 04:08:05 crc kubenswrapper[4667]: I0131 04:08:05.564233 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="847c3d86-2c1d-4b19-9558-5c03c65e1539" containerName="cinder-scheduler" containerID="cri-o://45ba0f5bf2e9664f1fa0c323e72ce0181803c9fa89cafd2f96e094f86680a277" gracePeriod=30 Jan 31 04:08:05 crc kubenswrapper[4667]: I0131 04:08:05.564435 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="847c3d86-2c1d-4b19-9558-5c03c65e1539" containerName="probe" containerID="cri-o://7c5e912076e31ea7ada41c76fb183375c6e23b3958824be4a2c20ea2962c8b42" gracePeriod=30 Jan 31 04:08:06 crc kubenswrapper[4667]: I0131 04:08:06.576617 4667 generic.go:334] "Generic (PLEG): container finished" podID="847c3d86-2c1d-4b19-9558-5c03c65e1539" containerID="7c5e912076e31ea7ada41c76fb183375c6e23b3958824be4a2c20ea2962c8b42" exitCode=0 Jan 31 04:08:06 crc kubenswrapper[4667]: I0131 04:08:06.576698 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"847c3d86-2c1d-4b19-9558-5c03c65e1539","Type":"ContainerDied","Data":"7c5e912076e31ea7ada41c76fb183375c6e23b3958824be4a2c20ea2962c8b42"} Jan 31 04:08:06 crc kubenswrapper[4667]: I0131 04:08:06.873064 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6fb8dc74db-tdj6x" podUID="707e0230-af22-42d8-9d59-8ea928b3178c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 04:08:06 crc kubenswrapper[4667]: I0131 04:08:06.873068 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6fb8dc74db-tdj6x" podUID="707e0230-af22-42d8-9d59-8ea928b3178c" 
containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 04:08:07 crc kubenswrapper[4667]: I0131 04:08:07.877134 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6fb8dc74db-tdj6x" podUID="707e0230-af22-42d8-9d59-8ea928b3178c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 04:08:07 crc kubenswrapper[4667]: I0131 04:08:07.877154 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6fb8dc74db-tdj6x" podUID="707e0230-af22-42d8-9d59-8ea928b3178c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.113649 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f96457d78-wrrfr" Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.521302 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c175848a-4645-42e7-8ccc-ab873e1ff7aa" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.608126 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-746c944c96-t4g84_43842154-1666-491b-b37a-061c1a7c2b90/neutron-httpd/2.log" Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.609021 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-746c944c96-t4g84" Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.618323 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-746c944c96-t4g84_43842154-1666-491b-b37a-061c1a7c2b90/neutron-httpd/2.log" Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.618668 4667 generic.go:334] "Generic (PLEG): container finished" podID="43842154-1666-491b-b37a-061c1a7c2b90" containerID="15267c0b3933699e7d571eb96c51d917a3ecf248dccdd25d9641dad4133590cc" exitCode=0 Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.618714 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-746c944c96-t4g84" event={"ID":"43842154-1666-491b-b37a-061c1a7c2b90","Type":"ContainerDied","Data":"15267c0b3933699e7d571eb96c51d917a3ecf248dccdd25d9641dad4133590cc"} Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.618751 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-746c944c96-t4g84" event={"ID":"43842154-1666-491b-b37a-061c1a7c2b90","Type":"ContainerDied","Data":"d5792665b427db5423a942b5ae6e9824580cdc0be61cd232246d547cfa111570"} Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.618770 4667 scope.go:117] "RemoveContainer" containerID="0ac01a4ee14d592410a8439010a709e1191bfd2ccef6b725b09419143daee58c" Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.618900 4667 util.go:48] "No ready sandbox for pod can be found. 
Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.693204 4667 scope.go:117] "RemoveContainer" containerID="15267c0b3933699e7d571eb96c51d917a3ecf248dccdd25d9641dad4133590cc"
Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.695103 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7f96457d78-wrrfr"
Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.793518 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mmw4\" (UniqueName: \"kubernetes.io/projected/43842154-1666-491b-b37a-061c1a7c2b90-kube-api-access-7mmw4\") pod \"43842154-1666-491b-b37a-061c1a7c2b90\" (UID: \"43842154-1666-491b-b37a-061c1a7c2b90\") "
Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.793588 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-httpd-config\") pod \"43842154-1666-491b-b37a-061c1a7c2b90\" (UID: \"43842154-1666-491b-b37a-061c1a7c2b90\") "
Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.793626 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-config\") pod \"43842154-1666-491b-b37a-061c1a7c2b90\" (UID: \"43842154-1666-491b-b37a-061c1a7c2b90\") "
Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.793674 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-combined-ca-bundle\") pod \"43842154-1666-491b-b37a-061c1a7c2b90\" (UID: \"43842154-1666-491b-b37a-061c1a7c2b90\") "
Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.793772 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-ovndb-tls-certs\") pod \"43842154-1666-491b-b37a-061c1a7c2b90\" (UID: \"43842154-1666-491b-b37a-061c1a7c2b90\") "
Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.814026 4667 scope.go:117] "RemoveContainer" containerID="0ac01a4ee14d592410a8439010a709e1191bfd2ccef6b725b09419143daee58c"
Jan 31 04:08:09 crc kubenswrapper[4667]: E0131 04:08:09.816356 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ac01a4ee14d592410a8439010a709e1191bfd2ccef6b725b09419143daee58c\": container with ID starting with 0ac01a4ee14d592410a8439010a709e1191bfd2ccef6b725b09419143daee58c not found: ID does not exist" containerID="0ac01a4ee14d592410a8439010a709e1191bfd2ccef6b725b09419143daee58c"
Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.816394 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ac01a4ee14d592410a8439010a709e1191bfd2ccef6b725b09419143daee58c"} err="failed to get container status \"0ac01a4ee14d592410a8439010a709e1191bfd2ccef6b725b09419143daee58c\": rpc error: code = NotFound desc = could not find container \"0ac01a4ee14d592410a8439010a709e1191bfd2ccef6b725b09419143daee58c\": container with ID starting with 0ac01a4ee14d592410a8439010a709e1191bfd2ccef6b725b09419143daee58c not found: ID does not exist"
Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.816420 4667 scope.go:117] "RemoveContainer" containerID="15267c0b3933699e7d571eb96c51d917a3ecf248dccdd25d9641dad4133590cc"
Jan 31 04:08:09 crc kubenswrapper[4667]: E0131 04:08:09.816929 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15267c0b3933699e7d571eb96c51d917a3ecf248dccdd25d9641dad4133590cc\": container with ID starting with 15267c0b3933699e7d571eb96c51d917a3ecf248dccdd25d9641dad4133590cc not found: ID does not exist" containerID="15267c0b3933699e7d571eb96c51d917a3ecf248dccdd25d9641dad4133590cc"
Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.816959 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15267c0b3933699e7d571eb96c51d917a3ecf248dccdd25d9641dad4133590cc"} err="failed to get container status \"15267c0b3933699e7d571eb96c51d917a3ecf248dccdd25d9641dad4133590cc\": rpc error: code = NotFound desc = could not find container \"15267c0b3933699e7d571eb96c51d917a3ecf248dccdd25d9641dad4133590cc\": container with ID starting with 15267c0b3933699e7d571eb96c51d917a3ecf248dccdd25d9641dad4133590cc not found: ID does not exist"
Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.832009 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="29617aa3-f0e0-4528-9ba6-1385314227d9" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.162:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.840503 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "43842154-1666-491b-b37a-061c1a7c2b90" (UID: "43842154-1666-491b-b37a-061c1a7c2b90"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.841110 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43842154-1666-491b-b37a-061c1a7c2b90-kube-api-access-7mmw4" (OuterVolumeSpecName: "kube-api-access-7mmw4") pod "43842154-1666-491b-b37a-061c1a7c2b90" (UID: "43842154-1666-491b-b37a-061c1a7c2b90"). InnerVolumeSpecName "kube-api-access-7mmw4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.896655 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mmw4\" (UniqueName: \"kubernetes.io/projected/43842154-1666-491b-b37a-061c1a7c2b90-kube-api-access-7mmw4\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:09 crc kubenswrapper[4667]: I0131 04:08:09.896691 4667 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.052296 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5f87b7b68-pjkwf"]
Jan 31 04:08:10 crc kubenswrapper[4667]: E0131 04:08:10.056179 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43842154-1666-491b-b37a-061c1a7c2b90" containerName="neutron-httpd"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.056211 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="43842154-1666-491b-b37a-061c1a7c2b90" containerName="neutron-httpd"
Jan 31 04:08:10 crc kubenswrapper[4667]: E0131 04:08:10.056236 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43842154-1666-491b-b37a-061c1a7c2b90" containerName="neutron-api"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.056244 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="43842154-1666-491b-b37a-061c1a7c2b90" containerName="neutron-api"
Jan 31 04:08:10 crc kubenswrapper[4667]: E0131 04:08:10.056258 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ada731-7288-4699-9ae2-d1bde47a02a2" containerName="init"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.056266 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ada731-7288-4699-9ae2-d1bde47a02a2" containerName="init"
Jan 31 04:08:10 crc kubenswrapper[4667]: E0131 04:08:10.056278 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ada731-7288-4699-9ae2-d1bde47a02a2" containerName="dnsmasq-dns"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.056285 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ada731-7288-4699-9ae2-d1bde47a02a2" containerName="dnsmasq-dns"
Jan 31 04:08:10 crc kubenswrapper[4667]: E0131 04:08:10.056304 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43842154-1666-491b-b37a-061c1a7c2b90" containerName="neutron-httpd"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.056311 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="43842154-1666-491b-b37a-061c1a7c2b90" containerName="neutron-httpd"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.057738 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="43842154-1666-491b-b37a-061c1a7c2b90" containerName="neutron-httpd"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.057764 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="23ada731-7288-4699-9ae2-d1bde47a02a2" containerName="dnsmasq-dns"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.057774 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="43842154-1666-491b-b37a-061c1a7c2b90" containerName="neutron-httpd"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.057786 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="43842154-1666-491b-b37a-061c1a7c2b90" containerName="neutron-api"
Jan 31 04:08:10 crc kubenswrapper[4667]: E0131 04:08:10.058001 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43842154-1666-491b-b37a-061c1a7c2b90" containerName="neutron-httpd"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.058010 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="43842154-1666-491b-b37a-061c1a7c2b90" containerName="neutron-httpd"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.058188 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="43842154-1666-491b-b37a-061c1a7c2b90" containerName="neutron-httpd"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.063560 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.113669 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95dba098-f46c-4948-ab9b-c05d9bf48660-config-data\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.113717 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95dba098-f46c-4948-ab9b-c05d9bf48660-public-tls-certs\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.113774 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dba098-f46c-4948-ab9b-c05d9bf48660-combined-ca-bundle\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.113798 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95dba098-f46c-4948-ab9b-c05d9bf48660-internal-tls-certs\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.113825 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95dba098-f46c-4948-ab9b-c05d9bf48660-logs\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.113906 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s5vd\" (UniqueName: \"kubernetes.io/projected/95dba098-f46c-4948-ab9b-c05d9bf48660-kube-api-access-9s5vd\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.113929 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95dba098-f46c-4948-ab9b-c05d9bf48660-scripts\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
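The RemoveStaleState bursts above show the CPU and memory managers purging allocations that belonged to the old neutron and dnsmasq pod UIDs as a replacement pod is admitted: a redeployed pod keeps (or changes) its name, but always gets a fresh UID. A sketch, again against the placeholder one-entry-per-line journal, that maps pod names to the succession of UIDs seen in PLEG events, so delete/recreate cycles like cinder-scheduler-0's below become visible:

    import json
    import re

    # Track pod name -> ordered list of UIDs, using the
    # "SyncLoop (PLEG): event for pod" entries whose event payload is JSON.
    pleg_re = re.compile(r'pod="(?P<pod>[^"]+)" event=(?P<event>\{.*?\})')

    uids_by_pod = {}
    with open("kubelet-journal.log") as fh:  # placeholder path
        for line in fh:
            if "SyncLoop (PLEG): event for pod" not in line:
                continue
            m = pleg_re.search(line)
            if not m:
                continue
            uid = json.loads(m["event"])["ID"]
            uids = uids_by_pod.setdefault(m["pod"], [])
            if uid not in uids:
                uids.append(uid)

    for pod, uids in sorted(uids_by_pod.items()):
        if len(uids) > 1:
            print(f"{pod}: recreated, UIDs {' -> '.join(uids)}")

In this section it would report openstack/cinder-scheduler-0 moving from UID 847c3d86-2c1d-4b19-9558-5c03c65e1539 to c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a.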
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.133097 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-config" (OuterVolumeSpecName: "config") pod "43842154-1666-491b-b37a-061c1a7c2b90" (UID: "43842154-1666-491b-b37a-061c1a7c2b90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.136239 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43842154-1666-491b-b37a-061c1a7c2b90" (UID: "43842154-1666-491b-b37a-061c1a7c2b90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.144410 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f87b7b68-pjkwf"]
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.207131 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "43842154-1666-491b-b37a-061c1a7c2b90" (UID: "43842154-1666-491b-b37a-061c1a7c2b90"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.219455 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95dba098-f46c-4948-ab9b-c05d9bf48660-logs\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.219567 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s5vd\" (UniqueName: \"kubernetes.io/projected/95dba098-f46c-4948-ab9b-c05d9bf48660-kube-api-access-9s5vd\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.219602 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95dba098-f46c-4948-ab9b-c05d9bf48660-scripts\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.219647 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95dba098-f46c-4948-ab9b-c05d9bf48660-config-data\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.219668 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95dba098-f46c-4948-ab9b-c05d9bf48660-public-tls-certs\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.219713 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dba098-f46c-4948-ab9b-c05d9bf48660-combined-ca-bundle\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.219738 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95dba098-f46c-4948-ab9b-c05d9bf48660-internal-tls-certs\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.219791 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.219806 4667 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.219817 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/43842154-1666-491b-b37a-061c1a7c2b90-config\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.221569 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95dba098-f46c-4948-ab9b-c05d9bf48660-logs\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.236135 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95dba098-f46c-4948-ab9b-c05d9bf48660-internal-tls-certs\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.242373 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95dba098-f46c-4948-ab9b-c05d9bf48660-config-data\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.245957 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s5vd\" (UniqueName: \"kubernetes.io/projected/95dba098-f46c-4948-ab9b-c05d9bf48660-kube-api-access-9s5vd\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.246490 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95dba098-f46c-4948-ab9b-c05d9bf48660-scripts\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.247230 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dba098-f46c-4948-ab9b-c05d9bf48660-combined-ca-bundle\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.251287 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95dba098-f46c-4948-ab9b-c05d9bf48660-public-tls-certs\") pod \"placement-5f87b7b68-pjkwf\" (UID: \"95dba098-f46c-4948-ab9b-c05d9bf48660\") " pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.278912 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-746c944c96-t4g84"]
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.296578 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-746c944c96-t4g84"]
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.474663 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.652162 4667 generic.go:334] "Generic (PLEG): container finished" podID="847c3d86-2c1d-4b19-9558-5c03c65e1539" containerID="45ba0f5bf2e9664f1fa0c323e72ce0181803c9fa89cafd2f96e094f86680a277" exitCode=0
Jan 31 04:08:10 crc kubenswrapper[4667]: I0131 04:08:10.652216 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"847c3d86-2c1d-4b19-9558-5c03c65e1539","Type":"ContainerDied","Data":"45ba0f5bf2e9664f1fa0c323e72ce0181803c9fa89cafd2f96e094f86680a277"}
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.023000 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.159803 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-config-data-custom\") pod \"847c3d86-2c1d-4b19-9558-5c03c65e1539\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") "
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.159974 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/847c3d86-2c1d-4b19-9558-5c03c65e1539-etc-machine-id\") pod \"847c3d86-2c1d-4b19-9558-5c03c65e1539\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") "
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.160042 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-combined-ca-bundle\") pod \"847c3d86-2c1d-4b19-9558-5c03c65e1539\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") "
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.160201 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ckgn\" (UniqueName: \"kubernetes.io/projected/847c3d86-2c1d-4b19-9558-5c03c65e1539-kube-api-access-2ckgn\") pod \"847c3d86-2c1d-4b19-9558-5c03c65e1539\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") "
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.160263 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-scripts\") pod \"847c3d86-2c1d-4b19-9558-5c03c65e1539\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") "
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.160294 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-config-data\") pod \"847c3d86-2c1d-4b19-9558-5c03c65e1539\" (UID: \"847c3d86-2c1d-4b19-9558-5c03c65e1539\") "
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.165726 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/847c3d86-2c1d-4b19-9558-5c03c65e1539-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "847c3d86-2c1d-4b19-9558-5c03c65e1539" (UID: "847c3d86-2c1d-4b19-9558-5c03c65e1539"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.202501 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "847c3d86-2c1d-4b19-9558-5c03c65e1539" (UID: "847c3d86-2c1d-4b19-9558-5c03c65e1539"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.203230 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/847c3d86-2c1d-4b19-9558-5c03c65e1539-kube-api-access-2ckgn" (OuterVolumeSpecName: "kube-api-access-2ckgn") pod "847c3d86-2c1d-4b19-9558-5c03c65e1539" (UID: "847c3d86-2c1d-4b19-9558-5c03c65e1539"). InnerVolumeSpecName "kube-api-access-2ckgn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.216809 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-scripts" (OuterVolumeSpecName: "scripts") pod "847c3d86-2c1d-4b19-9558-5c03c65e1539" (UID: "847c3d86-2c1d-4b19-9558-5c03c65e1539"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.265068 4667 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/847c3d86-2c1d-4b19-9558-5c03c65e1539-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.265104 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ckgn\" (UniqueName: \"kubernetes.io/projected/847c3d86-2c1d-4b19-9558-5c03c65e1539-kube-api-access-2ckgn\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.265118 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.265128 4667 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.325114 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7b9d9fcc56-wmjp8" podUID="d8b59858-7b18-4bad-b555-b978f3fbea56" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.325592 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7b9d9fcc56-wmjp8" podUID="d8b59858-7b18-4bad-b555-b978f3fbea56" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.331078 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "847c3d86-2c1d-4b19-9558-5c03c65e1539" (UID: "847c3d86-2c1d-4b19-9558-5c03c65e1539"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.339629 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43842154-1666-491b-b37a-061c1a7c2b90" path="/var/lib/kubelet/pods/43842154-1666-491b-b37a-061c1a7c2b90/volumes"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.344541 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f87b7b68-pjkwf"]
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.369122 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.517008 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-config-data" (OuterVolumeSpecName: "config-data") pod "847c3d86-2c1d-4b19-9558-5c03c65e1539" (UID: "847c3d86-2c1d-4b19-9558-5c03c65e1539"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.575283 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847c3d86-2c1d-4b19-9558-5c03c65e1539-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.670312 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"847c3d86-2c1d-4b19-9558-5c03c65e1539","Type":"ContainerDied","Data":"5abe44b586c1affd5ff29b87b7ae62a7cfc7581d04c5bc70078b4f6f38c68935"}
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.670382 4667 scope.go:117] "RemoveContainer" containerID="7c5e912076e31ea7ada41c76fb183375c6e23b3958824be4a2c20ea2962c8b42"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.670578 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.679336 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f87b7b68-pjkwf" event={"ID":"95dba098-f46c-4948-ab9b-c05d9bf48660","Type":"ContainerStarted","Data":"76bdfe6478d372b9722b68be98144287a61a7abec794145b0eb722ba4747ce8d"}
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.679395 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f87b7b68-pjkwf" event={"ID":"95dba098-f46c-4948-ab9b-c05d9bf48660","Type":"ContainerStarted","Data":"2b5f5fb04a4d246c12c5b3aa85cb8945139f15f17241782b8906e9c06fa15531"}
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.705133 4667 scope.go:117] "RemoveContainer" containerID="45ba0f5bf2e9664f1fa0c323e72ce0181803c9fa89cafd2f96e094f86680a277"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.762933 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.772667 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.822387 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 31 04:08:11 crc kubenswrapper[4667]: E0131 04:08:11.823306 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847c3d86-2c1d-4b19-9558-5c03c65e1539" containerName="probe"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.823378 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="847c3d86-2c1d-4b19-9558-5c03c65e1539" containerName="probe"
Jan 31 04:08:11 crc kubenswrapper[4667]: E0131 04:08:11.823448 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847c3d86-2c1d-4b19-9558-5c03c65e1539" containerName="cinder-scheduler"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.823508 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="847c3d86-2c1d-4b19-9558-5c03c65e1539" containerName="cinder-scheduler"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.823792 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="847c3d86-2c1d-4b19-9558-5c03c65e1539" containerName="probe"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.823890 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="847c3d86-2c1d-4b19-9558-5c03c65e1539" containerName="cinder-scheduler"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.825137 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.829454 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.888270 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.923458 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6fb8dc74db-tdj6x" podUID="707e0230-af22-42d8-9d59-8ea928b3178c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.965164 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6fb8dc74db-tdj6x" podUID="707e0230-af22-42d8-9d59-8ea928b3178c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.982914 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a\") " pod="openstack/cinder-scheduler-0"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.983433 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhs6w\" (UniqueName: \"kubernetes.io/projected/c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a-kube-api-access-jhs6w\") pod \"cinder-scheduler-0\" (UID: \"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a\") " pod="openstack/cinder-scheduler-0"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.983593 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a\") " pod="openstack/cinder-scheduler-0"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.983723 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a-config-data\") pod \"cinder-scheduler-0\" (UID: \"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a\") " pod="openstack/cinder-scheduler-0"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.983871 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a\") " pod="openstack/cinder-scheduler-0"
Jan 31 04:08:11 crc kubenswrapper[4667]: I0131 04:08:11.984068 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a-scripts\") pod \"cinder-scheduler-0\" (UID: \"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a\") " pod="openstack/cinder-scheduler-0"
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.085910 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a-scripts\") pod \"cinder-scheduler-0\" (UID: \"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a\") " pod="openstack/cinder-scheduler-0"
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.087131 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a\") " pod="openstack/cinder-scheduler-0"
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.087225 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a\") " pod="openstack/cinder-scheduler-0"
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.087398 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhs6w\" (UniqueName: \"kubernetes.io/projected/c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a-kube-api-access-jhs6w\") pod \"cinder-scheduler-0\" (UID: \"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a\") " pod="openstack/cinder-scheduler-0"
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.087541 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a\") " pod="openstack/cinder-scheduler-0"
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.087657 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a-config-data\") pod \"cinder-scheduler-0\" (UID: \"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a\") " pod="openstack/cinder-scheduler-0"
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.087741 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a\") " pod="openstack/cinder-scheduler-0"
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.102163 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a\") " pod="openstack/cinder-scheduler-0"
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.103680 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a-scripts\") pod \"cinder-scheduler-0\" (UID: \"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a\") " pod="openstack/cinder-scheduler-0"
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.104249 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a-config-data\") pod \"cinder-scheduler-0\" (UID: \"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a\") " pod="openstack/cinder-scheduler-0"
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.108577 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a\") " pod="openstack/cinder-scheduler-0"
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.125521 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhs6w\" (UniqueName: \"kubernetes.io/projected/c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a-kube-api-access-jhs6w\") pod \"cinder-scheduler-0\" (UID: \"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a\") " pod="openstack/cinder-scheduler-0"
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.183670 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.309996 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b9d9fcc56-wmjp8" podUID="d8b59858-7b18-4bad-b555-b978f3fbea56" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.310437 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b9d9fcc56-wmjp8" podUID="d8b59858-7b18-4bad-b555-b978f3fbea56" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.734303 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f87b7b68-pjkwf" event={"ID":"95dba098-f46c-4948-ab9b-c05d9bf48660","Type":"ContainerStarted","Data":"4fc82b3aaf8278fe0581e7938819e44a9758058756639fef831297dc0fa82e81"}
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.734809 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.734851 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f87b7b68-pjkwf"
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.776279 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5f87b7b68-pjkwf" podStartSLOduration=2.7762265729999998 podStartE2EDuration="2.776226573s" podCreationTimestamp="2026-01-31 04:08:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:08:12.759549212 +0000 UTC m=+1216.275884511" watchObservedRunningTime="2026-01-31 04:08:12.776226573 +0000 UTC m=+1216.292561872"
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.846518 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.965006 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6fb8dc74db-tdj6x" podUID="707e0230-af22-42d8-9d59-8ea928b3178c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 04:08:12 crc kubenswrapper[4667]: I0131 04:08:12.965378 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6fb8dc74db-tdj6x" podUID="707e0230-af22-42d8-9d59-8ea928b3178c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 04:08:13 crc kubenswrapper[4667]: I0131 04:08:13.010359 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6fb8dc74db-tdj6x"
Jan 31 04:08:13 crc kubenswrapper[4667]: I0131 04:08:13.033170 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6fb8dc74db-tdj6x"
Jan 31 04:08:13 crc kubenswrapper[4667]: I0131 04:08:13.309594 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="847c3d86-2c1d-4b19-9558-5c03c65e1539" path="/var/lib/kubelet/pods/847c3d86-2c1d-4b19-9558-5c03c65e1539/volumes"
Jan 31 04:08:13 crc kubenswrapper[4667]: I0131 04:08:13.641682 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5558665b54-mq2t5"
Jan 31 04:08:13 crc kubenswrapper[4667]: I0131 04:08:13.823521 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a","Type":"ContainerStarted","Data":"181f6597c32d9723cabfb1b6b034c48223bd0e91a68786fb62cd82a5ee8cb83c"}
Jan 31 04:08:14 crc kubenswrapper[4667]: I0131 04:08:14.852929 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a","Type":"ContainerStarted","Data":"88d036cd4e705b40eeb62d7145d3a9279e27ef7072e5a52523fa2189473f1cd7"}
Jan 31 04:08:14 crc kubenswrapper[4667]: I0131 04:08:14.873102 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="29617aa3-f0e0-4528-9ba6-1385314227d9" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.162:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.265429 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.266776 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.270603 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-5btdr"
Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.272051 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.293423 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.304790 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.400120 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c47c09d9-21e3-4c10-936f-0d679cf6a8f1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c47c09d9-21e3-4c10-936f-0d679cf6a8f1\") " pod="openstack/openstackclient"
Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.400191 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c47c09d9-21e3-4c10-936f-0d679cf6a8f1-openstack-config-secret\") pod \"openstackclient\" (UID: \"c47c09d9-21e3-4c10-936f-0d679cf6a8f1\") " pod="openstack/openstackclient"
Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.400227 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kgtq\" (UniqueName: \"kubernetes.io/projected/c47c09d9-21e3-4c10-936f-0d679cf6a8f1-kube-api-access-5kgtq\") pod \"openstackclient\" (UID: \"c47c09d9-21e3-4c10-936f-0d679cf6a8f1\") " pod="openstack/openstackclient"
Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.400341 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c47c09d9-21e3-4c10-936f-0d679cf6a8f1-openstack-config\") pod \"openstackclient\" (UID: \"c47c09d9-21e3-4c10-936f-0d679cf6a8f1\") " pod="openstack/openstackclient"
Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.501981 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c47c09d9-21e3-4c10-936f-0d679cf6a8f1-openstack-config-secret\") pod \"openstackclient\" (UID: \"c47c09d9-21e3-4c10-936f-0d679cf6a8f1\") " pod="openstack/openstackclient"
Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.502031 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kgtq\" (UniqueName: \"kubernetes.io/projected/c47c09d9-21e3-4c10-936f-0d679cf6a8f1-kube-api-access-5kgtq\") pod \"openstackclient\" (UID: \"c47c09d9-21e3-4c10-936f-0d679cf6a8f1\") " pod="openstack/openstackclient"
Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.502150 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c47c09d9-21e3-4c10-936f-0d679cf6a8f1-openstack-config\") pod \"openstackclient\" (UID: \"c47c09d9-21e3-4c10-936f-0d679cf6a8f1\") " pod="openstack/openstackclient"
Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.502194 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c47c09d9-21e3-4c10-936f-0d679cf6a8f1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c47c09d9-21e3-4c10-936f-0d679cf6a8f1\") " pod="openstack/openstackclient"
Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.506515 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c47c09d9-21e3-4c10-936f-0d679cf6a8f1-openstack-config\") pod \"openstackclient\" (UID: \"c47c09d9-21e3-4c10-936f-0d679cf6a8f1\") " pod="openstack/openstackclient"
Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.519570 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c47c09d9-21e3-4c10-936f-0d679cf6a8f1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c47c09d9-21e3-4c10-936f-0d679cf6a8f1\") " pod="openstack/openstackclient"
Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.531567 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c47c09d9-21e3-4c10-936f-0d679cf6a8f1-openstack-config-secret\") pod \"openstackclient\" (UID: \"c47c09d9-21e3-4c10-936f-0d679cf6a8f1\") " pod="openstack/openstackclient"
Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.569336 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kgtq\" (UniqueName: \"kubernetes.io/projected/c47c09d9-21e3-4c10-936f-0d679cf6a8f1-kube-api-access-5kgtq\") pod \"openstackclient\" (UID: \"c47c09d9-21e3-4c10-936f-0d679cf6a8f1\") " pod="openstack/openstackclient"
Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.594247 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Need to start a new one" pod="openstack/openstackclient" Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.706358 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.706431 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.870315 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a","Type":"ContainerStarted","Data":"09beadedefba1368824993b3e09dc002add955a3c4c5db8e9b6790b7bfe69ca6"} Jan 31 04:08:15 crc kubenswrapper[4667]: I0131 04:08:15.906036 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.90601297 podStartE2EDuration="4.90601297s" podCreationTimestamp="2026-01-31 04:08:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:08:15.896092977 +0000 UTC m=+1219.412428276" watchObservedRunningTime="2026-01-31 04:08:15.90601297 +0000 UTC m=+1219.422348269" Jan 31 04:08:16 crc kubenswrapper[4667]: I0131 04:08:16.379955 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 31 04:08:16 crc kubenswrapper[4667]: I0131 04:08:16.570088 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7b9d9fcc56-wmjp8" podUID="d8b59858-7b18-4bad-b555-b978f3fbea56" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 04:08:16 crc kubenswrapper[4667]: I0131 04:08:16.882653 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c47c09d9-21e3-4c10-936f-0d679cf6a8f1","Type":"ContainerStarted","Data":"e74de46e0f2c34739db5741d661dac181a248deaf3ddc779c61aa8c00212002b"} Jan 31 04:08:17 crc kubenswrapper[4667]: I0131 04:08:17.184552 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 31 04:08:17 crc kubenswrapper[4667]: I0131 04:08:17.320056 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b9d9fcc56-wmjp8" podUID="d8b59858-7b18-4bad-b555-b978f3fbea56" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 04:08:17 crc kubenswrapper[4667]: I0131 04:08:17.320515 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b9d9fcc56-wmjp8" podUID="d8b59858-7b18-4bad-b555-b978f3fbea56" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.168:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 04:08:17 crc kubenswrapper[4667]: I0131 04:08:17.345386 4667 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:08:17 crc kubenswrapper[4667]: I0131 04:08:17.352011 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b9d9fcc56-wmjp8" Jan 31 04:08:17 crc kubenswrapper[4667]: I0131 04:08:17.475196 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6fb8dc74db-tdj6x"] Jan 31 04:08:17 crc kubenswrapper[4667]: I0131 04:08:17.475488 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6fb8dc74db-tdj6x" podUID="707e0230-af22-42d8-9d59-8ea928b3178c" containerName="barbican-api-log" containerID="cri-o://a1afd66ffbade368b61e6fdee49de378fa084e0f40cadeac24baeaf05305f8cc" gracePeriod=30 Jan 31 04:08:17 crc kubenswrapper[4667]: I0131 04:08:17.475651 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6fb8dc74db-tdj6x" podUID="707e0230-af22-42d8-9d59-8ea928b3178c" containerName="barbican-api" containerID="cri-o://85abfdaf0b5e2a34ed14e24152198b322f8ad1b18fc601fa4aa97cd296831c03" gracePeriod=30 Jan 31 04:08:18 crc kubenswrapper[4667]: I0131 04:08:18.036097 4667 generic.go:334] "Generic (PLEG): container finished" podID="707e0230-af22-42d8-9d59-8ea928b3178c" containerID="a1afd66ffbade368b61e6fdee49de378fa084e0f40cadeac24baeaf05305f8cc" exitCode=143 Jan 31 04:08:18 crc kubenswrapper[4667]: I0131 04:08:18.047395 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb8dc74db-tdj6x" event={"ID":"707e0230-af22-42d8-9d59-8ea928b3178c","Type":"ContainerDied","Data":"a1afd66ffbade368b61e6fdee49de378fa084e0f40cadeac24baeaf05305f8cc"} Jan 31 04:08:19 crc kubenswrapper[4667]: I0131 04:08:19.914187 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="29617aa3-f0e0-4528-9ba6-1385314227d9" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.162:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 04:08:20 crc kubenswrapper[4667]: I0131 04:08:20.840010 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6fb8dc74db-tdj6x" podUID="707e0230-af22-42d8-9d59-8ea928b3178c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": read tcp 10.217.0.2:58146->10.217.0.166:9311: read: connection reset by peer" Jan 31 04:08:20 crc kubenswrapper[4667]: I0131 04:08:20.840097 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6fb8dc74db-tdj6x" podUID="707e0230-af22-42d8-9d59-8ea928b3178c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": read tcp 10.217.0.2:58158->10.217.0.166:9311: read: connection reset by peer" Jan 31 04:08:21 crc kubenswrapper[4667]: I0131 04:08:21.086551 4667 generic.go:334] "Generic (PLEG): container finished" podID="707e0230-af22-42d8-9d59-8ea928b3178c" containerID="85abfdaf0b5e2a34ed14e24152198b322f8ad1b18fc601fa4aa97cd296831c03" exitCode=0 Jan 31 04:08:21 crc kubenswrapper[4667]: I0131 04:08:21.086998 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb8dc74db-tdj6x" event={"ID":"707e0230-af22-42d8-9d59-8ea928b3178c","Type":"ContainerDied","Data":"85abfdaf0b5e2a34ed14e24152198b322f8ad1b18fc601fa4aa97cd296831c03"} Jan 31 04:08:21 crc kubenswrapper[4667]: I0131 
04:08:21.354322 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:08:21 crc kubenswrapper[4667]: I0131 04:08:21.454602 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/707e0230-af22-42d8-9d59-8ea928b3178c-logs\") pod \"707e0230-af22-42d8-9d59-8ea928b3178c\" (UID: \"707e0230-af22-42d8-9d59-8ea928b3178c\") " Jan 31 04:08:21 crc kubenswrapper[4667]: I0131 04:08:21.454701 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9gkd\" (UniqueName: \"kubernetes.io/projected/707e0230-af22-42d8-9d59-8ea928b3178c-kube-api-access-v9gkd\") pod \"707e0230-af22-42d8-9d59-8ea928b3178c\" (UID: \"707e0230-af22-42d8-9d59-8ea928b3178c\") " Jan 31 04:08:21 crc kubenswrapper[4667]: I0131 04:08:21.455437 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/707e0230-af22-42d8-9d59-8ea928b3178c-logs" (OuterVolumeSpecName: "logs") pod "707e0230-af22-42d8-9d59-8ea928b3178c" (UID: "707e0230-af22-42d8-9d59-8ea928b3178c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:08:21 crc kubenswrapper[4667]: I0131 04:08:21.456031 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707e0230-af22-42d8-9d59-8ea928b3178c-config-data\") pod \"707e0230-af22-42d8-9d59-8ea928b3178c\" (UID: \"707e0230-af22-42d8-9d59-8ea928b3178c\") " Jan 31 04:08:21 crc kubenswrapper[4667]: I0131 04:08:21.456517 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707e0230-af22-42d8-9d59-8ea928b3178c-combined-ca-bundle\") pod \"707e0230-af22-42d8-9d59-8ea928b3178c\" (UID: \"707e0230-af22-42d8-9d59-8ea928b3178c\") " Jan 31 04:08:21 crc kubenswrapper[4667]: I0131 04:08:21.456582 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/707e0230-af22-42d8-9d59-8ea928b3178c-config-data-custom\") pod \"707e0230-af22-42d8-9d59-8ea928b3178c\" (UID: \"707e0230-af22-42d8-9d59-8ea928b3178c\") " Jan 31 04:08:21 crc kubenswrapper[4667]: I0131 04:08:21.457153 4667 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/707e0230-af22-42d8-9d59-8ea928b3178c-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:21 crc kubenswrapper[4667]: I0131 04:08:21.466765 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/707e0230-af22-42d8-9d59-8ea928b3178c-kube-api-access-v9gkd" (OuterVolumeSpecName: "kube-api-access-v9gkd") pod "707e0230-af22-42d8-9d59-8ea928b3178c" (UID: "707e0230-af22-42d8-9d59-8ea928b3178c"). InnerVolumeSpecName "kube-api-access-v9gkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:08:21 crc kubenswrapper[4667]: I0131 04:08:21.466983 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/707e0230-af22-42d8-9d59-8ea928b3178c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "707e0230-af22-42d8-9d59-8ea928b3178c" (UID: "707e0230-af22-42d8-9d59-8ea928b3178c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:21 crc kubenswrapper[4667]: I0131 04:08:21.496137 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/707e0230-af22-42d8-9d59-8ea928b3178c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "707e0230-af22-42d8-9d59-8ea928b3178c" (UID: "707e0230-af22-42d8-9d59-8ea928b3178c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:21 crc kubenswrapper[4667]: I0131 04:08:21.514911 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/707e0230-af22-42d8-9d59-8ea928b3178c-config-data" (OuterVolumeSpecName: "config-data") pod "707e0230-af22-42d8-9d59-8ea928b3178c" (UID: "707e0230-af22-42d8-9d59-8ea928b3178c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:21 crc kubenswrapper[4667]: I0131 04:08:21.560332 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/707e0230-af22-42d8-9d59-8ea928b3178c-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:21 crc kubenswrapper[4667]: I0131 04:08:21.560994 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/707e0230-af22-42d8-9d59-8ea928b3178c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:21 crc kubenswrapper[4667]: I0131 04:08:21.561057 4667 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/707e0230-af22-42d8-9d59-8ea928b3178c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:21 crc kubenswrapper[4667]: I0131 04:08:21.561153 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9gkd\" (UniqueName: \"kubernetes.io/projected/707e0230-af22-42d8-9d59-8ea928b3178c-kube-api-access-v9gkd\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.109658 4667 generic.go:334] "Generic (PLEG): container finished" podID="c175848a-4645-42e7-8ccc-ab873e1ff7aa" containerID="01eb1079afd2af8f3389078687891d730c75bad84b6638edecff44816261ec2e" exitCode=137 Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.109754 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c175848a-4645-42e7-8ccc-ab873e1ff7aa","Type":"ContainerDied","Data":"01eb1079afd2af8f3389078687891d730c75bad84b6638edecff44816261ec2e"} Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.124047 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb8dc74db-tdj6x" event={"ID":"707e0230-af22-42d8-9d59-8ea928b3178c","Type":"ContainerDied","Data":"6b1008f4cbdffc4d5bf5570b1211c92ccfe0a08346d0adb343500f1f74229778"} Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.124121 4667 scope.go:117] "RemoveContainer" containerID="85abfdaf0b5e2a34ed14e24152198b322f8ad1b18fc601fa4aa97cd296831c03" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.124175 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6fb8dc74db-tdj6x" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.166677 4667 scope.go:117] "RemoveContainer" containerID="a1afd66ffbade368b61e6fdee49de378fa084e0f40cadeac24baeaf05305f8cc" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.208753 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6fb8dc74db-tdj6x"] Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.283851 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6fb8dc74db-tdj6x"] Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.469137 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-8bff87d99-j8cd2"] Jan 31 04:08:22 crc kubenswrapper[4667]: E0131 04:08:22.469709 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707e0230-af22-42d8-9d59-8ea928b3178c" containerName="barbican-api-log" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.469728 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="707e0230-af22-42d8-9d59-8ea928b3178c" containerName="barbican-api-log" Jan 31 04:08:22 crc kubenswrapper[4667]: E0131 04:08:22.469748 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707e0230-af22-42d8-9d59-8ea928b3178c" containerName="barbican-api" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.469754 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="707e0230-af22-42d8-9d59-8ea928b3178c" containerName="barbican-api" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.469987 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="707e0230-af22-42d8-9d59-8ea928b3178c" containerName="barbican-api-log" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.470014 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="707e0230-af22-42d8-9d59-8ea928b3178c" containerName="barbican-api" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.471266 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.483424 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.483469 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.483525 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.487868 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-8bff87d99-j8cd2"] Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.599610 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-config-data\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.599664 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-run-httpd\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.599721 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-etc-swift\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.599746 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lmj2\" (UniqueName: \"kubernetes.io/projected/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-kube-api-access-7lmj2\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.599788 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-internal-tls-certs\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.599832 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-combined-ca-bundle\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.599873 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-log-httpd\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " 
pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.599908 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-public-tls-certs\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.670144 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.702028 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-combined-ca-bundle\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.702089 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-log-httpd\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.702137 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-public-tls-certs\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.702169 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-config-data\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.702194 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-run-httpd\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.702242 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-etc-swift\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.702272 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lmj2\" (UniqueName: \"kubernetes.io/projected/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-kube-api-access-7lmj2\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.702313 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-internal-tls-certs\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.704280 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-run-httpd\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.704540 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-log-httpd\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.711470 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-public-tls-certs\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.711750 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-etc-swift\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.712141 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-config-data\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.715323 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-combined-ca-bundle\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.722626 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-internal-tls-certs\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.729586 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lmj2\" (UniqueName: \"kubernetes.io/projected/30fc5b26-45dd-42f8-9a58-7ba07c5aa56a-kube-api-access-7lmj2\") pod \"swift-proxy-8bff87d99-j8cd2\" (UID: \"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a\") " pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.803295 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-config-data\") pod \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\" (UID: 
\"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.803424 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c175848a-4645-42e7-8ccc-ab873e1ff7aa-log-httpd\") pod \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.803484 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c175848a-4645-42e7-8ccc-ab873e1ff7aa-run-httpd\") pod \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.803621 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bht9f\" (UniqueName: \"kubernetes.io/projected/c175848a-4645-42e7-8ccc-ab873e1ff7aa-kube-api-access-bht9f\") pod \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.803740 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-sg-core-conf-yaml\") pod \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.803767 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-combined-ca-bundle\") pod \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.803788 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-scripts\") pod \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\" (UID: \"c175848a-4645-42e7-8ccc-ab873e1ff7aa\") " Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.804215 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c175848a-4645-42e7-8ccc-ab873e1ff7aa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c175848a-4645-42e7-8ccc-ab873e1ff7aa" (UID: "c175848a-4645-42e7-8ccc-ab873e1ff7aa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.804560 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c175848a-4645-42e7-8ccc-ab873e1ff7aa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c175848a-4645-42e7-8ccc-ab873e1ff7aa" (UID: "c175848a-4645-42e7-8ccc-ab873e1ff7aa"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.823159 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-scripts" (OuterVolumeSpecName: "scripts") pod "c175848a-4645-42e7-8ccc-ab873e1ff7aa" (UID: "c175848a-4645-42e7-8ccc-ab873e1ff7aa"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.823211 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c175848a-4645-42e7-8ccc-ab873e1ff7aa-kube-api-access-bht9f" (OuterVolumeSpecName: "kube-api-access-bht9f") pod "c175848a-4645-42e7-8ccc-ab873e1ff7aa" (UID: "c175848a-4645-42e7-8ccc-ab873e1ff7aa"). InnerVolumeSpecName "kube-api-access-bht9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.832687 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.840242 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.894463 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c175848a-4645-42e7-8ccc-ab873e1ff7aa" (UID: "c175848a-4645-42e7-8ccc-ab873e1ff7aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.908761 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.908824 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.908859 4667 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c175848a-4645-42e7-8ccc-ab873e1ff7aa-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.908878 4667 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c175848a-4645-42e7-8ccc-ab873e1ff7aa-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.908893 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bht9f\" (UniqueName: \"kubernetes.io/projected/c175848a-4645-42e7-8ccc-ab873e1ff7aa-kube-api-access-bht9f\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.913405 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c175848a-4645-42e7-8ccc-ab873e1ff7aa" (UID: "c175848a-4645-42e7-8ccc-ab873e1ff7aa"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:22 crc kubenswrapper[4667]: I0131 04:08:22.951963 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-config-data" (OuterVolumeSpecName: "config-data") pod "c175848a-4645-42e7-8ccc-ab873e1ff7aa" (UID: "c175848a-4645-42e7-8ccc-ab873e1ff7aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.010930 4667 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.010974 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c175848a-4645-42e7-8ccc-ab873e1ff7aa-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.110347 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.193832 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.195117 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c175848a-4645-42e7-8ccc-ab873e1ff7aa","Type":"ContainerDied","Data":"f4b53ed96e656c378650b2049ae52b412d6038c17fdb0792effdbccfc9a88b45"} Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.195190 4667 scope.go:117] "RemoveContainer" containerID="01eb1079afd2af8f3389078687891d730c75bad84b6638edecff44816261ec2e" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.271316 4667 scope.go:117] "RemoveContainer" containerID="3f956df323dcf5ea513fbf4fca63c5b6c48b46d1d34cec9e0da1d18d570c1f71" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.304480 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="707e0230-af22-42d8-9d59-8ea928b3178c" path="/var/lib/kubelet/pods/707e0230-af22-42d8-9d59-8ea928b3178c/volumes" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.347077 4667 scope.go:117] "RemoveContainer" containerID="5f3054a5c6f2254b318f9d5a214799bad34be30fa6a4ea2f5072c54c61f95f3b" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.378101 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.409013 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.431330 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:08:23 crc kubenswrapper[4667]: E0131 04:08:23.432342 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c175848a-4645-42e7-8ccc-ab873e1ff7aa" containerName="sg-core" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.432368 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="c175848a-4645-42e7-8ccc-ab873e1ff7aa" containerName="sg-core" Jan 31 04:08:23 crc kubenswrapper[4667]: E0131 04:08:23.432391 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c175848a-4645-42e7-8ccc-ab873e1ff7aa" containerName="proxy-httpd" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.432400 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="c175848a-4645-42e7-8ccc-ab873e1ff7aa" containerName="proxy-httpd" Jan 31 04:08:23 crc kubenswrapper[4667]: E0131 04:08:23.432445 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c175848a-4645-42e7-8ccc-ab873e1ff7aa" containerName="ceilometer-notification-agent" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.432452 4667 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c175848a-4645-42e7-8ccc-ab873e1ff7aa" containerName="ceilometer-notification-agent" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.432983 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="c175848a-4645-42e7-8ccc-ab873e1ff7aa" containerName="ceilometer-notification-agent" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.433027 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="c175848a-4645-42e7-8ccc-ab873e1ff7aa" containerName="proxy-httpd" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.433039 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="c175848a-4645-42e7-8ccc-ab873e1ff7aa" containerName="sg-core" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.438914 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.439997 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.446584 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.446823 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.542051 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-scripts\") pod \"ceilometer-0\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.542098 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-config-data\") pod \"ceilometer-0\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.542144 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47cfcedd-9367-442f-b907-a8a1738e30b8-run-httpd\") pod \"ceilometer-0\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.542161 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.542210 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7xpv\" (UniqueName: \"kubernetes.io/projected/47cfcedd-9367-442f-b907-a8a1738e30b8-kube-api-access-d7xpv\") pod \"ceilometer-0\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.542257 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.542288 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47cfcedd-9367-442f-b907-a8a1738e30b8-log-httpd\") pod \"ceilometer-0\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.645513 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-scripts\") pod \"ceilometer-0\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.646003 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-config-data\") pod \"ceilometer-0\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.646087 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47cfcedd-9367-442f-b907-a8a1738e30b8-run-httpd\") pod \"ceilometer-0\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.646141 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.646226 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7xpv\" (UniqueName: \"kubernetes.io/projected/47cfcedd-9367-442f-b907-a8a1738e30b8-kube-api-access-d7xpv\") pod \"ceilometer-0\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.646308 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.646348 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47cfcedd-9367-442f-b907-a8a1738e30b8-log-httpd\") pod \"ceilometer-0\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.647436 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47cfcedd-9367-442f-b907-a8a1738e30b8-log-httpd\") pod \"ceilometer-0\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.647833 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47cfcedd-9367-442f-b907-a8a1738e30b8-run-httpd\") pod \"ceilometer-0\" (UID: 
\"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.670881 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.678987 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-scripts\") pod \"ceilometer-0\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.682055 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-config-data\") pod \"ceilometer-0\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.685652 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7xpv\" (UniqueName: \"kubernetes.io/projected/47cfcedd-9367-442f-b907-a8a1738e30b8-kube-api-access-d7xpv\") pod \"ceilometer-0\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.701562 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " pod="openstack/ceilometer-0" Jan 31 04:08:23 crc kubenswrapper[4667]: I0131 04:08:23.779169 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:08:24 crc kubenswrapper[4667]: I0131 04:08:24.032798 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-8bff87d99-j8cd2"] Jan 31 04:08:24 crc kubenswrapper[4667]: E0131 04:08:24.102703 4667 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f4de68c86d9c3573ddc0788b001a3eae91dfcb7e29dc9ce1d8a3cab1257e4277/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f4de68c86d9c3573ddc0788b001a3eae91dfcb7e29dc9ce1d8a3cab1257e4277/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_neutron-746c944c96-t4g84_43842154-1666-491b-b37a-061c1a7c2b90/neutron-api/0.log" to get inode usage: stat /var/log/pods/openstack_neutron-746c944c96-t4g84_43842154-1666-491b-b37a-061c1a7c2b90/neutron-api/0.log: no such file or directory Jan 31 04:08:24 crc kubenswrapper[4667]: E0131 04:08:24.202675 4667 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: , extraDiskErr: could not stat "/var/log/pods/openstack_dnsmasq-dns-55f844cf75-c58f7_23ada731-7288-4699-9ae2-d1bde47a02a2/dnsmasq-dns/0.log" to get inode usage: stat /var/log/pods/openstack_dnsmasq-dns-55f844cf75-c58f7_23ada731-7288-4699-9ae2-d1bde47a02a2/dnsmasq-dns/0.log: no such file or directory Jan 31 04:08:24 crc kubenswrapper[4667]: I0131 04:08:24.245584 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8bff87d99-j8cd2" event={"ID":"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a","Type":"ContainerStarted","Data":"62a3a22a2f1250290c587927740894e3c14aeeefdc08888609320ad4eb3a66b1"} Jan 31 04:08:24 crc kubenswrapper[4667]: I0131 04:08:24.610928 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:08:24 crc kubenswrapper[4667]: I0131 04:08:24.806200 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7f55cc74b5-gg8dl" Jan 31 04:08:24 crc kubenswrapper[4667]: I0131 04:08:24.896826 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7dc9f74cdf-w757n"] Jan 31 04:08:24 crc kubenswrapper[4667]: I0131 04:08:24.897674 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7dc9f74cdf-w757n" podUID="0de14766-3b67-45ce-a8d8-276f90ce6310" containerName="neutron-api" containerID="cri-o://d945eb277af255971ce21f9fbe29ebc3a76c0875c97b71da5a95149ba1c61844" gracePeriod=30 Jan 31 04:08:24 crc kubenswrapper[4667]: I0131 04:08:24.898287 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7dc9f74cdf-w757n" podUID="0de14766-3b67-45ce-a8d8-276f90ce6310" containerName="neutron-httpd" containerID="cri-o://4744d93f757062e772e2bad13de89f14714f329c6557f80834045603808a0be2" gracePeriod=30 Jan 31 04:08:25 crc kubenswrapper[4667]: I0131 04:08:25.285950 4667 generic.go:334] "Generic (PLEG): container finished" podID="0de14766-3b67-45ce-a8d8-276f90ce6310" containerID="4744d93f757062e772e2bad13de89f14714f329c6557f80834045603808a0be2" exitCode=0 Jan 31 04:08:25 crc kubenswrapper[4667]: I0131 04:08:25.363900 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c175848a-4645-42e7-8ccc-ab873e1ff7aa" path="/var/lib/kubelet/pods/c175848a-4645-42e7-8ccc-ab873e1ff7aa/volumes" Jan 31 04:08:25 crc kubenswrapper[4667]: I0131 04:08:25.365179 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:25 crc kubenswrapper[4667]: I0131 04:08:25.365218 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:25 crc kubenswrapper[4667]: I0131 04:08:25.365234 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc9f74cdf-w757n" event={"ID":"0de14766-3b67-45ce-a8d8-276f90ce6310","Type":"ContainerDied","Data":"4744d93f757062e772e2bad13de89f14714f329c6557f80834045603808a0be2"} Jan 31 04:08:25 crc kubenswrapper[4667]: I0131 04:08:25.365264 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8bff87d99-j8cd2" event={"ID":"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a","Type":"ContainerStarted","Data":"33a244eb2d3f6c1e48d63d8167527ae79ed6358820048e67020fda30a83a32d9"} Jan 31 04:08:25 crc kubenswrapper[4667]: I0131 04:08:25.365287 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8bff87d99-j8cd2" event={"ID":"30fc5b26-45dd-42f8-9a58-7ba07c5aa56a","Type":"ContainerStarted","Data":"255716e7e28209c556077a137dec2b39bbbd16164642dd372f7c6c682ac3dba3"} Jan 31 04:08:25 crc kubenswrapper[4667]: I0131 04:08:25.365299 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"47cfcedd-9367-442f-b907-a8a1738e30b8","Type":"ContainerStarted","Data":"30633270b5a158264360dd3edb1d5d6cd533496dfe8bfef13eb7950907ac7181"} Jan 31 04:08:26 crc kubenswrapper[4667]: I0131 04:08:26.244413 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-8bff87d99-j8cd2" podStartSLOduration=4.244391148 podStartE2EDuration="4.244391148s" podCreationTimestamp="2026-01-31 04:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:08:25.340766482 +0000 UTC m=+1228.857101781" watchObservedRunningTime="2026-01-31 04:08:26.244391148 +0000 UTC m=+1229.760726437" Jan 31 04:08:26 crc kubenswrapper[4667]: I0131 04:08:26.254382 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:08:26 crc kubenswrapper[4667]: I0131 04:08:26.315406 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"47cfcedd-9367-442f-b907-a8a1738e30b8","Type":"ContainerStarted","Data":"da8e622f57beaa96c778bec445fb5caa3821fc1663844988d3b038c1fc2a241a"} Jan 31 04:08:27 crc kubenswrapper[4667]: I0131 04:08:27.328053 4667 generic.go:334] "Generic (PLEG): container finished" podID="0de14766-3b67-45ce-a8d8-276f90ce6310" containerID="d945eb277af255971ce21f9fbe29ebc3a76c0875c97b71da5a95149ba1c61844" exitCode=0 Jan 31 04:08:27 crc kubenswrapper[4667]: I0131 04:08:27.328109 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc9f74cdf-w757n" event={"ID":"0de14766-3b67-45ce-a8d8-276f90ce6310","Type":"ContainerDied","Data":"d945eb277af255971ce21f9fbe29ebc3a76c0875c97b71da5a95149ba1c61844"} Jan 31 04:08:28 crc kubenswrapper[4667]: I0131 04:08:28.346201 4667 generic.go:334] "Generic (PLEG): container finished" podID="29617aa3-f0e0-4528-9ba6-1385314227d9" containerID="77d88734b6ac59c22b44af3ed81aae7205d959245c4c52fd4bd26209dc3b501f" exitCode=137 Jan 31 04:08:28 crc kubenswrapper[4667]: I0131 04:08:28.346296 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"29617aa3-f0e0-4528-9ba6-1385314227d9","Type":"ContainerDied","Data":"77d88734b6ac59c22b44af3ed81aae7205d959245c4c52fd4bd26209dc3b501f"} Jan 31 04:08:29 crc kubenswrapper[4667]: I0131 04:08:29.767219 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="29617aa3-f0e0-4528-9ba6-1385314227d9" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.162:8776/healthcheck\": dial tcp 10.217.0.162:8776: connect: connection refused" Jan 31 04:08:32 crc kubenswrapper[4667]: E0131 04:08:32.210777 4667 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43842154_1666_491b_b37a_061c1a7c2b90.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23ada731_7288_4699_9ae2_d1bde47a02a2.slice/crio-8bf14bbfb0697cd8ca5db878235e1722f6a7221626113ed0a8b3441f92987685\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod847c3d86_2c1d_4b19_9558_5c03c65e1539.slice/crio-45ba0f5bf2e9664f1fa0c323e72ce0181803c9fa89cafd2f96e094f86680a277.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod847c3d86_2c1d_4b19_9558_5c03c65e1539.slice/crio-conmon-7c5e912076e31ea7ada41c76fb183375c6e23b3958824be4a2c20ea2962c8b42.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23ada731_7288_4699_9ae2_d1bde47a02a2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod707e0230_af22_42d8_9d59_8ea928b3178c.slice/crio-6b1008f4cbdffc4d5bf5570b1211c92ccfe0a08346d0adb343500f1f74229778\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod707e0230_af22_42d8_9d59_8ea928b3178c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc175848a_4645_42e7_8ccc_ab873e1ff7aa.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43842154_1666_491b_b37a_061c1a7c2b90.slice/crio-15267c0b3933699e7d571eb96c51d917a3ecf248dccdd25d9641dad4133590cc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc175848a_4645_42e7_8ccc_ab873e1ff7aa.slice/crio-conmon-01eb1079afd2af8f3389078687891d730c75bad84b6638edecff44816261ec2e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod707e0230_af22_42d8_9d59_8ea928b3178c.slice/crio-conmon-85abfdaf0b5e2a34ed14e24152198b322f8ad1b18fc601fa4aa97cd296831c03.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc175848a_4645_42e7_8ccc_ab873e1ff7aa.slice/crio-01eb1079afd2af8f3389078687891d730c75bad84b6638edecff44816261ec2e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43842154_1666_491b_b37a_061c1a7c2b90.slice/crio-conmon-15267c0b3933699e7d571eb96c51d917a3ecf248dccdd25d9641dad4133590cc.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29617aa3_f0e0_4528_9ba6_1385314227d9.slice/crio-conmon-77d88734b6ac59c22b44af3ed81aae7205d959245c4c52fd4bd26209dc3b501f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc175848a_4645_42e7_8ccc_ab873e1ff7aa.slice/crio-f4b53ed96e656c378650b2049ae52b412d6038c17fdb0792effdbccfc9a88b45\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0de14766_3b67_45ce_a8d8_276f90ce6310.slice/crio-conmon-4744d93f757062e772e2bad13de89f14714f329c6557f80834045603808a0be2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0de14766_3b67_45ce_a8d8_276f90ce6310.slice/crio-d945eb277af255971ce21f9fbe29ebc3a76c0875c97b71da5a95149ba1c61844.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29617aa3_f0e0_4528_9ba6_1385314227d9.slice/crio-77d88734b6ac59c22b44af3ed81aae7205d959245c4c52fd4bd26209dc3b501f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod847c3d86_2c1d_4b19_9558_5c03c65e1539.slice/crio-7c5e912076e31ea7ada41c76fb183375c6e23b3958824be4a2c20ea2962c8b42.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6974567_3bea_447a_bb8b_ced22b6d34ce.slice/crio-3fa239e2b62f1e7aacddff89f2ed28a743b788c82b3a5252236ff48d58158880.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43842154_1666_491b_b37a_061c1a7c2b90.slice/crio-d5792665b427db5423a942b5ae6e9824580cdc0be61cd232246d547cfa111570\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7f8fd18_06a0_432e_8c17_c9b432b6ca69.slice/crio-75959a94e1776a7025f344a57c090542bf63fb0615110c632e65e3a8c9188b18.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod847c3d86_2c1d_4b19_9558_5c03c65e1539.slice/crio-5abe44b586c1affd5ff29b87b7ae62a7cfc7581d04c5bc70078b4f6f38c68935\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod707e0230_af22_42d8_9d59_8ea928b3178c.slice/crio-conmon-a1afd66ffbade368b61e6fdee49de378fa084e0f40cadeac24baeaf05305f8cc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0de14766_3b67_45ce_a8d8_276f90ce6310.slice/crio-4744d93f757062e772e2bad13de89f14714f329c6557f80834045603808a0be2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod707e0230_af22_42d8_9d59_8ea928b3178c.slice/crio-85abfdaf0b5e2a34ed14e24152198b322f8ad1b18fc601fa4aa97cd296831c03.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0de14766_3b67_45ce_a8d8_276f90ce6310.slice/crio-conmon-d945eb277af255971ce21f9fbe29ebc3a76c0875c97b71da5a95149ba1c61844.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23ada731_7288_4699_9ae2_d1bde47a02a2.slice/crio-04031cafab9c8ee2081c1d44fa5555b5c1ede2c62553aa59a7e3863f5e2cb39e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod847c3d86_2c1d_4b19_9558_5c03c65e1539.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23ada731_7288_4699_9ae2_d1bde47a02a2.slice/crio-conmon-04031cafab9c8ee2081c1d44fa5555b5c1ede2c62553aa59a7e3863f5e2cb39e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6974567_3bea_447a_bb8b_ced22b6d34ce.slice/crio-conmon-3fa239e2b62f1e7aacddff89f2ed28a743b788c82b3a5252236ff48d58158880.scope\": RecentStats: unable to find data in memory cache]" Jan 31 04:08:32 crc kubenswrapper[4667]: I0131 04:08:32.396193 4667 generic.go:334] "Generic (PLEG): container finished" podID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerID="75959a94e1776a7025f344a57c090542bf63fb0615110c632e65e3a8c9188b18" exitCode=137 Jan 31 04:08:32 crc kubenswrapper[4667]: I0131 04:08:32.396261 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78789d8f44-5trmc" event={"ID":"b7f8fd18-06a0-432e-8c17-c9b432b6ca69","Type":"ContainerDied","Data":"75959a94e1776a7025f344a57c090542bf63fb0615110c632e65e3a8c9188b18"} Jan 31 04:08:32 crc kubenswrapper[4667]: I0131 04:08:32.404002 4667 generic.go:334] "Generic (PLEG): container finished" podID="c6974567-3bea-447a-bb8b-ced22b6d34ce" containerID="3fa239e2b62f1e7aacddff89f2ed28a743b788c82b3a5252236ff48d58158880" exitCode=137 Jan 31 04:08:32 crc kubenswrapper[4667]: I0131 04:08:32.404061 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86c748c4d6-2grmh" event={"ID":"c6974567-3bea-447a-bb8b-ced22b6d34ce","Type":"ContainerDied","Data":"3fa239e2b62f1e7aacddff89f2ed28a743b788c82b3a5252236ff48d58158880"} Jan 31 04:08:32 crc kubenswrapper[4667]: I0131 04:08:32.845805 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:32 crc kubenswrapper[4667]: I0131 04:08:32.847029 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-8bff87d99-j8cd2" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.069799 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.187783 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-config-data-custom\") pod \"29617aa3-f0e0-4528-9ba6-1385314227d9\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.187919 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-combined-ca-bundle\") pod \"29617aa3-f0e0-4528-9ba6-1385314227d9\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.188032 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29617aa3-f0e0-4528-9ba6-1385314227d9-etc-machine-id\") pod \"29617aa3-f0e0-4528-9ba6-1385314227d9\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.188163 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-config-data\") pod \"29617aa3-f0e0-4528-9ba6-1385314227d9\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.188196 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29617aa3-f0e0-4528-9ba6-1385314227d9-logs\") pod \"29617aa3-f0e0-4528-9ba6-1385314227d9\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.188289 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-scripts\") pod \"29617aa3-f0e0-4528-9ba6-1385314227d9\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.188327 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9c6r\" (UniqueName: \"kubernetes.io/projected/29617aa3-f0e0-4528-9ba6-1385314227d9-kube-api-access-l9c6r\") pod \"29617aa3-f0e0-4528-9ba6-1385314227d9\" (UID: \"29617aa3-f0e0-4528-9ba6-1385314227d9\") " Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.192154 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29617aa3-f0e0-4528-9ba6-1385314227d9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "29617aa3-f0e0-4528-9ba6-1385314227d9" (UID: "29617aa3-f0e0-4528-9ba6-1385314227d9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.193373 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29617aa3-f0e0-4528-9ba6-1385314227d9-logs" (OuterVolumeSpecName: "logs") pod "29617aa3-f0e0-4528-9ba6-1385314227d9" (UID: "29617aa3-f0e0-4528-9ba6-1385314227d9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.208734 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "29617aa3-f0e0-4528-9ba6-1385314227d9" (UID: "29617aa3-f0e0-4528-9ba6-1385314227d9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.226126 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29617aa3-f0e0-4528-9ba6-1385314227d9-kube-api-access-l9c6r" (OuterVolumeSpecName: "kube-api-access-l9c6r") pod "29617aa3-f0e0-4528-9ba6-1385314227d9" (UID: "29617aa3-f0e0-4528-9ba6-1385314227d9"). InnerVolumeSpecName "kube-api-access-l9c6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.237104 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-scripts" (OuterVolumeSpecName: "scripts") pod "29617aa3-f0e0-4528-9ba6-1385314227d9" (UID: "29617aa3-f0e0-4528-9ba6-1385314227d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.302404 4667 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29617aa3-f0e0-4528-9ba6-1385314227d9-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.302443 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.302454 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9c6r\" (UniqueName: \"kubernetes.io/projected/29617aa3-f0e0-4528-9ba6-1385314227d9-kube-api-access-l9c6r\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.302464 4667 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.302474 4667 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29617aa3-f0e0-4528-9ba6-1385314227d9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.479136 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.599079 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29617aa3-f0e0-4528-9ba6-1385314227d9" (UID: "29617aa3-f0e0-4528-9ba6-1385314227d9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.612904 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.639161 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-config-data" (OuterVolumeSpecName: "config-data") pod "29617aa3-f0e0-4528-9ba6-1385314227d9" (UID: "29617aa3-f0e0-4528-9ba6-1385314227d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.675252 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc9f74cdf-w757n" event={"ID":"0de14766-3b67-45ce-a8d8-276f90ce6310","Type":"ContainerDied","Data":"8bef120c683b565caf8d531bf458f0b78f43d9937fcef7710f6a86b2ad05d2e7"} Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.675323 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bef120c683b565caf8d531bf458f0b78f43d9937fcef7710f6a86b2ad05d2e7" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.675357 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"29617aa3-f0e0-4528-9ba6-1385314227d9","Type":"ContainerDied","Data":"f124111ecc20f94e6d850fbf1512e16fa4beecee0c740f82f0103d8a68f1adb2"} Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.675378 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78789d8f44-5trmc" event={"ID":"b7f8fd18-06a0-432e-8c17-c9b432b6ca69","Type":"ContainerStarted","Data":"d51854ff784d64b2b3584b6cdda45491a29c7d1089ddf69708469cfc6e98fccc"} Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.675415 4667 scope.go:117] "RemoveContainer" containerID="77d88734b6ac59c22b44af3ed81aae7205d959245c4c52fd4bd26209dc3b501f" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.711054 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.717167 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29617aa3-f0e0-4528-9ba6-1385314227d9-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.761054 4667 scope.go:117] "RemoveContainer" containerID="209f03fa0ee39240735a4626e46b8eee5a7d4acbd72c037b10d1abe6f27f2cad" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.818611 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-ovndb-tls-certs\") pod \"0de14766-3b67-45ce-a8d8-276f90ce6310\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.818965 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-combined-ca-bundle\") pod \"0de14766-3b67-45ce-a8d8-276f90ce6310\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.819100 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-public-tls-certs\") pod \"0de14766-3b67-45ce-a8d8-276f90ce6310\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.819187 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-internal-tls-certs\") pod \"0de14766-3b67-45ce-a8d8-276f90ce6310\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.819328 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-config\") pod \"0de14766-3b67-45ce-a8d8-276f90ce6310\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.819359 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-httpd-config\") pod \"0de14766-3b67-45ce-a8d8-276f90ce6310\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.819420 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfq4t\" (UniqueName: \"kubernetes.io/projected/0de14766-3b67-45ce-a8d8-276f90ce6310-kube-api-access-tfq4t\") pod \"0de14766-3b67-45ce-a8d8-276f90ce6310\" (UID: \"0de14766-3b67-45ce-a8d8-276f90ce6310\") " Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.838039 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0de14766-3b67-45ce-a8d8-276f90ce6310" (UID: "0de14766-3b67-45ce-a8d8-276f90ce6310"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.873236 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.877095 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0de14766-3b67-45ce-a8d8-276f90ce6310-kube-api-access-tfq4t" (OuterVolumeSpecName: "kube-api-access-tfq4t") pod "0de14766-3b67-45ce-a8d8-276f90ce6310" (UID: "0de14766-3b67-45ce-a8d8-276f90ce6310"). InnerVolumeSpecName "kube-api-access-tfq4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.882928 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.935430 4667 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.935475 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfq4t\" (UniqueName: \"kubernetes.io/projected/0de14766-3b67-45ce-a8d8-276f90ce6310-kube-api-access-tfq4t\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.953753 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 31 04:08:35 crc kubenswrapper[4667]: E0131 04:08:35.954725 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29617aa3-f0e0-4528-9ba6-1385314227d9" containerName="cinder-api" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.954747 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="29617aa3-f0e0-4528-9ba6-1385314227d9" containerName="cinder-api" Jan 31 04:08:35 crc kubenswrapper[4667]: E0131 04:08:35.954780 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de14766-3b67-45ce-a8d8-276f90ce6310" containerName="neutron-api" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.954788 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de14766-3b67-45ce-a8d8-276f90ce6310" containerName="neutron-api" Jan 31 04:08:35 crc kubenswrapper[4667]: E0131 04:08:35.954923 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de14766-3b67-45ce-a8d8-276f90ce6310" containerName="neutron-httpd" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.954933 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de14766-3b67-45ce-a8d8-276f90ce6310" containerName="neutron-httpd" Jan 31 04:08:35 crc kubenswrapper[4667]: E0131 04:08:35.954956 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29617aa3-f0e0-4528-9ba6-1385314227d9" containerName="cinder-api-log" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.954985 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="29617aa3-f0e0-4528-9ba6-1385314227d9" containerName="cinder-api-log" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.955374 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="0de14766-3b67-45ce-a8d8-276f90ce6310" containerName="neutron-api" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.955392 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="0de14766-3b67-45ce-a8d8-276f90ce6310" containerName="neutron-httpd" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.955414 4667 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="29617aa3-f0e0-4528-9ba6-1385314227d9" containerName="cinder-api" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.955470 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="29617aa3-f0e0-4528-9ba6-1385314227d9" containerName="cinder-api-log" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.958910 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.969258 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.972748 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 31 04:08:35 crc kubenswrapper[4667]: I0131 04:08:35.985170 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.048987 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.052504 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10513551-238c-4a99-83c9-2992fb1bbaae-config-data\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.052553 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10513551-238c-4a99-83c9-2992fb1bbaae-config-data-custom\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.052586 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2mhv\" (UniqueName: \"kubernetes.io/projected/10513551-238c-4a99-83c9-2992fb1bbaae-kube-api-access-f2mhv\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.052616 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10513551-238c-4a99-83c9-2992fb1bbaae-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.052632 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10513551-238c-4a99-83c9-2992fb1bbaae-public-tls-certs\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.052670 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10513551-238c-4a99-83c9-2992fb1bbaae-scripts\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.052685 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10513551-238c-4a99-83c9-2992fb1bbaae-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.052716 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10513551-238c-4a99-83c9-2992fb1bbaae-logs\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.052737 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10513551-238c-4a99-83c9-2992fb1bbaae-etc-machine-id\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.067142 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0de14766-3b67-45ce-a8d8-276f90ce6310" (UID: "0de14766-3b67-45ce-a8d8-276f90ce6310"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.086786 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0de14766-3b67-45ce-a8d8-276f90ce6310" (UID: "0de14766-3b67-45ce-a8d8-276f90ce6310"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.097236 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0de14766-3b67-45ce-a8d8-276f90ce6310" (UID: "0de14766-3b67-45ce-a8d8-276f90ce6310"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.099727 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-config" (OuterVolumeSpecName: "config") pod "0de14766-3b67-45ce-a8d8-276f90ce6310" (UID: "0de14766-3b67-45ce-a8d8-276f90ce6310"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.133947 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0de14766-3b67-45ce-a8d8-276f90ce6310" (UID: "0de14766-3b67-45ce-a8d8-276f90ce6310"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.154584 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10513551-238c-4a99-83c9-2992fb1bbaae-config-data-custom\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.154678 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2mhv\" (UniqueName: \"kubernetes.io/projected/10513551-238c-4a99-83c9-2992fb1bbaae-kube-api-access-f2mhv\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.154728 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10513551-238c-4a99-83c9-2992fb1bbaae-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.154757 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10513551-238c-4a99-83c9-2992fb1bbaae-public-tls-certs\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.154815 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10513551-238c-4a99-83c9-2992fb1bbaae-scripts\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.154868 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10513551-238c-4a99-83c9-2992fb1bbaae-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.154910 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10513551-238c-4a99-83c9-2992fb1bbaae-logs\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.154936 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10513551-238c-4a99-83c9-2992fb1bbaae-etc-machine-id\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.155085 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10513551-238c-4a99-83c9-2992fb1bbaae-config-data\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.155159 4667 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:36 crc 
kubenswrapper[4667]: I0131 04:08:36.155177 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.155192 4667 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.155205 4667 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.155222 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0de14766-3b67-45ce-a8d8-276f90ce6310-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.155741 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10513551-238c-4a99-83c9-2992fb1bbaae-logs\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.156267 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10513551-238c-4a99-83c9-2992fb1bbaae-etc-machine-id\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.160936 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10513551-238c-4a99-83c9-2992fb1bbaae-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.162653 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10513551-238c-4a99-83c9-2992fb1bbaae-config-data-custom\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.163094 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10513551-238c-4a99-83c9-2992fb1bbaae-scripts\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.163273 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10513551-238c-4a99-83c9-2992fb1bbaae-public-tls-certs\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.163518 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10513551-238c-4a99-83c9-2992fb1bbaae-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.165610 4667 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10513551-238c-4a99-83c9-2992fb1bbaae-config-data\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.179308 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2mhv\" (UniqueName: \"kubernetes.io/projected/10513551-238c-4a99-83c9-2992fb1bbaae-kube-api-access-f2mhv\") pod \"cinder-api-0\" (UID: \"10513551-238c-4a99-83c9-2992fb1bbaae\") " pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.299614 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.570826 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86c748c4d6-2grmh" event={"ID":"c6974567-3bea-447a-bb8b-ced22b6d34ce","Type":"ContainerStarted","Data":"8585ef04e351d14473c07be1275ec2c6840212275304d32bbdccbfc70cb910c8"} Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.576720 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c47c09d9-21e3-4c10-936f-0d679cf6a8f1","Type":"ContainerStarted","Data":"16e930cefbc5c0dcf939a5f060680a4c17b1a2ad74f9adba61edc81061af69eb"} Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.582936 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"47cfcedd-9367-442f-b907-a8a1738e30b8","Type":"ContainerStarted","Data":"76f153880c0f656f39bbba0f0256274ac42b948f47073e9118f0d3ac792d38e3"} Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.583956 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7dc9f74cdf-w757n" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.683910 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.305961944 podStartE2EDuration="21.683881759s" podCreationTimestamp="2026-01-31 04:08:15 +0000 UTC" firstStartedPulling="2026-01-31 04:08:16.410275113 +0000 UTC m=+1219.926610412" lastFinishedPulling="2026-01-31 04:08:34.788194928 +0000 UTC m=+1238.304530227" observedRunningTime="2026-01-31 04:08:36.633399059 +0000 UTC m=+1240.149734358" watchObservedRunningTime="2026-01-31 04:08:36.683881759 +0000 UTC m=+1240.200217058" Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.702426 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7dc9f74cdf-w757n"] Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.710334 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7dc9f74cdf-w757n"] Jan 31 04:08:36 crc kubenswrapper[4667]: I0131 04:08:36.796172 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 31 04:08:37 crc kubenswrapper[4667]: I0131 04:08:37.300395 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0de14766-3b67-45ce-a8d8-276f90ce6310" path="/var/lib/kubelet/pods/0de14766-3b67-45ce-a8d8-276f90ce6310/volumes" Jan 31 04:08:37 crc kubenswrapper[4667]: I0131 04:08:37.301686 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29617aa3-f0e0-4528-9ba6-1385314227d9" path="/var/lib/kubelet/pods/29617aa3-f0e0-4528-9ba6-1385314227d9/volumes" Jan 31 04:08:37 crc kubenswrapper[4667]: I0131 04:08:37.599878 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"47cfcedd-9367-442f-b907-a8a1738e30b8","Type":"ContainerStarted","Data":"969b2e32747a82dd8661083013695427f244507a3c6d04d865d04193a46e57e1"} Jan 31 04:08:37 crc kubenswrapper[4667]: I0131 04:08:37.606378 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"10513551-238c-4a99-83c9-2992fb1bbaae","Type":"ContainerStarted","Data":"468a3cdc9c2488ef89a2c7565feeb0b174733aff0182b3631db96fd83a240723"} Jan 31 04:08:37 crc kubenswrapper[4667]: I0131 04:08:37.606457 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"10513551-238c-4a99-83c9-2992fb1bbaae","Type":"ContainerStarted","Data":"c9f35b0905e6530ef080a82178eca37e7cd193d1c80c014ed7dff41542250647"} Jan 31 04:08:39 crc kubenswrapper[4667]: I0131 04:08:39.627451 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"10513551-238c-4a99-83c9-2992fb1bbaae","Type":"ContainerStarted","Data":"dbbadfffc3c3abf4f7ce058c7ea171f0787a1ea259d6397424d75bd7243a4719"} Jan 31 04:08:39 crc kubenswrapper[4667]: I0131 04:08:39.628287 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 31 04:08:39 crc kubenswrapper[4667]: I0131 04:08:39.656547 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.656526739 podStartE2EDuration="4.656526739s" podCreationTimestamp="2026-01-31 04:08:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:08:39.654822823 +0000 UTC m=+1243.171158122" watchObservedRunningTime="2026-01-31 04:08:39.656526739 +0000 UTC 
m=+1243.172862038" Jan 31 04:08:39 crc kubenswrapper[4667]: I0131 04:08:39.777119 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="29617aa3-f0e0-4528-9ba6-1385314227d9" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.162:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 04:08:41 crc kubenswrapper[4667]: I0131 04:08:41.649094 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"47cfcedd-9367-442f-b907-a8a1738e30b8","Type":"ContainerStarted","Data":"e44269cf5ceeeafaf01a93e9e61e7bf3930bbb2e195057022bb61d9f00188ee4"} Jan 31 04:08:41 crc kubenswrapper[4667]: I0131 04:08:41.649263 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="47cfcedd-9367-442f-b907-a8a1738e30b8" containerName="ceilometer-central-agent" containerID="cri-o://da8e622f57beaa96c778bec445fb5caa3821fc1663844988d3b038c1fc2a241a" gracePeriod=30 Jan 31 04:08:41 crc kubenswrapper[4667]: I0131 04:08:41.649858 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="47cfcedd-9367-442f-b907-a8a1738e30b8" containerName="proxy-httpd" containerID="cri-o://e44269cf5ceeeafaf01a93e9e61e7bf3930bbb2e195057022bb61d9f00188ee4" gracePeriod=30 Jan 31 04:08:41 crc kubenswrapper[4667]: I0131 04:08:41.649886 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="47cfcedd-9367-442f-b907-a8a1738e30b8" containerName="ceilometer-notification-agent" containerID="cri-o://76f153880c0f656f39bbba0f0256274ac42b948f47073e9118f0d3ac792d38e3" gracePeriod=30 Jan 31 04:08:41 crc kubenswrapper[4667]: I0131 04:08:41.649944 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="47cfcedd-9367-442f-b907-a8a1738e30b8" containerName="sg-core" containerID="cri-o://969b2e32747a82dd8661083013695427f244507a3c6d04d865d04193a46e57e1" gracePeriod=30 Jan 31 04:08:41 crc kubenswrapper[4667]: I0131 04:08:41.649967 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 04:08:41 crc kubenswrapper[4667]: I0131 04:08:41.691104 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.868135826 podStartE2EDuration="18.691085067s" podCreationTimestamp="2026-01-31 04:08:23 +0000 UTC" firstStartedPulling="2026-01-31 04:08:24.615914725 +0000 UTC m=+1228.132250024" lastFinishedPulling="2026-01-31 04:08:40.438863966 +0000 UTC m=+1243.955199265" observedRunningTime="2026-01-31 04:08:41.688515129 +0000 UTC m=+1245.204850428" watchObservedRunningTime="2026-01-31 04:08:41.691085067 +0000 UTC m=+1245.207420366" Jan 31 04:08:41 crc kubenswrapper[4667]: I0131 04:08:41.755628 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:08:41 crc kubenswrapper[4667]: I0131 04:08:41.756078 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:08:41 crc kubenswrapper[4667]: I0131 04:08:41.843916 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:08:41 crc kubenswrapper[4667]: I0131 04:08:41.843975 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:08:42 crc kubenswrapper[4667]: I0131 04:08:42.515241 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f87b7b68-pjkwf" Jan 31 04:08:42 crc kubenswrapper[4667]: I0131 04:08:42.517751 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f87b7b68-pjkwf" Jan 31 04:08:42 crc kubenswrapper[4667]: I0131 04:08:42.684671 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7f96457d78-wrrfr"] Jan 31 04:08:42 crc kubenswrapper[4667]: I0131 04:08:42.685574 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7f96457d78-wrrfr" podUID="55398def-7876-49e4-9509-29374a5f9321" containerName="placement-log" containerID="cri-o://43f2c08ab8c7d46bfc3dcd7a50c24313f1a272150df709c5ef0da7ed110d9159" gracePeriod=30 Jan 31 04:08:42 crc kubenswrapper[4667]: I0131 04:08:42.686529 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7f96457d78-wrrfr" podUID="55398def-7876-49e4-9509-29374a5f9321" containerName="placement-api" containerID="cri-o://93c9f813d28740047f5f15de7f6fe09cb14534a8f751e97820605552bd6ca35c" gracePeriod=30 Jan 31 04:08:42 crc kubenswrapper[4667]: I0131 04:08:42.722001 4667 generic.go:334] "Generic (PLEG): container finished" podID="47cfcedd-9367-442f-b907-a8a1738e30b8" containerID="e44269cf5ceeeafaf01a93e9e61e7bf3930bbb2e195057022bb61d9f00188ee4" exitCode=0 Jan 31 04:08:42 crc kubenswrapper[4667]: I0131 04:08:42.722314 4667 generic.go:334] "Generic (PLEG): container finished" podID="47cfcedd-9367-442f-b907-a8a1738e30b8" containerID="969b2e32747a82dd8661083013695427f244507a3c6d04d865d04193a46e57e1" exitCode=2 Jan 31 04:08:42 crc kubenswrapper[4667]: I0131 04:08:42.722384 4667 generic.go:334] "Generic (PLEG): container finished" podID="47cfcedd-9367-442f-b907-a8a1738e30b8" containerID="76f153880c0f656f39bbba0f0256274ac42b948f47073e9118f0d3ac792d38e3" exitCode=0 Jan 31 04:08:42 crc kubenswrapper[4667]: I0131 04:08:42.722442 4667 generic.go:334] "Generic (PLEG): container finished" podID="47cfcedd-9367-442f-b907-a8a1738e30b8" containerID="da8e622f57beaa96c778bec445fb5caa3821fc1663844988d3b038c1fc2a241a" exitCode=0 Jan 31 04:08:42 crc kubenswrapper[4667]: I0131 04:08:42.723709 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"47cfcedd-9367-442f-b907-a8a1738e30b8","Type":"ContainerDied","Data":"e44269cf5ceeeafaf01a93e9e61e7bf3930bbb2e195057022bb61d9f00188ee4"} Jan 31 04:08:42 crc kubenswrapper[4667]: I0131 04:08:42.738642 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"47cfcedd-9367-442f-b907-a8a1738e30b8","Type":"ContainerDied","Data":"969b2e32747a82dd8661083013695427f244507a3c6d04d865d04193a46e57e1"} Jan 31 04:08:42 crc kubenswrapper[4667]: I0131 04:08:42.738982 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"47cfcedd-9367-442f-b907-a8a1738e30b8","Type":"ContainerDied","Data":"76f153880c0f656f39bbba0f0256274ac42b948f47073e9118f0d3ac792d38e3"} Jan 31 04:08:42 crc kubenswrapper[4667]: I0131 04:08:42.739058 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"47cfcedd-9367-442f-b907-a8a1738e30b8","Type":"ContainerDied","Data":"da8e622f57beaa96c778bec445fb5caa3821fc1663844988d3b038c1fc2a241a"} Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.345004 4667 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.447303 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7xpv\" (UniqueName: \"kubernetes.io/projected/47cfcedd-9367-442f-b907-a8a1738e30b8-kube-api-access-d7xpv\") pod \"47cfcedd-9367-442f-b907-a8a1738e30b8\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.447387 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47cfcedd-9367-442f-b907-a8a1738e30b8-run-httpd\") pod \"47cfcedd-9367-442f-b907-a8a1738e30b8\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.447468 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-scripts\") pod \"47cfcedd-9367-442f-b907-a8a1738e30b8\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.447505 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-config-data\") pod \"47cfcedd-9367-442f-b907-a8a1738e30b8\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.447652 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47cfcedd-9367-442f-b907-a8a1738e30b8-log-httpd\") pod \"47cfcedd-9367-442f-b907-a8a1738e30b8\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.447755 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47cfcedd-9367-442f-b907-a8a1738e30b8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "47cfcedd-9367-442f-b907-a8a1738e30b8" (UID: "47cfcedd-9367-442f-b907-a8a1738e30b8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.447797 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-sg-core-conf-yaml\") pod \"47cfcedd-9367-442f-b907-a8a1738e30b8\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.447964 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-combined-ca-bundle\") pod \"47cfcedd-9367-442f-b907-a8a1738e30b8\" (UID: \"47cfcedd-9367-442f-b907-a8a1738e30b8\") " Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.448201 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47cfcedd-9367-442f-b907-a8a1738e30b8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "47cfcedd-9367-442f-b907-a8a1738e30b8" (UID: "47cfcedd-9367-442f-b907-a8a1738e30b8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.448701 4667 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47cfcedd-9367-442f-b907-a8a1738e30b8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.448738 4667 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/47cfcedd-9367-442f-b907-a8a1738e30b8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.458053 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-scripts" (OuterVolumeSpecName: "scripts") pod "47cfcedd-9367-442f-b907-a8a1738e30b8" (UID: "47cfcedd-9367-442f-b907-a8a1738e30b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.460017 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47cfcedd-9367-442f-b907-a8a1738e30b8-kube-api-access-d7xpv" (OuterVolumeSpecName: "kube-api-access-d7xpv") pod "47cfcedd-9367-442f-b907-a8a1738e30b8" (UID: "47cfcedd-9367-442f-b907-a8a1738e30b8"). InnerVolumeSpecName "kube-api-access-d7xpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.499054 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "47cfcedd-9367-442f-b907-a8a1738e30b8" (UID: "47cfcedd-9367-442f-b907-a8a1738e30b8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.552132 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7xpv\" (UniqueName: \"kubernetes.io/projected/47cfcedd-9367-442f-b907-a8a1738e30b8-kube-api-access-d7xpv\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.552168 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.552179 4667 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.563105 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47cfcedd-9367-442f-b907-a8a1738e30b8" (UID: "47cfcedd-9367-442f-b907-a8a1738e30b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.605140 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-config-data" (OuterVolumeSpecName: "config-data") pod "47cfcedd-9367-442f-b907-a8a1738e30b8" (UID: "47cfcedd-9367-442f-b907-a8a1738e30b8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.654470 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.654510 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47cfcedd-9367-442f-b907-a8a1738e30b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.735824 4667 generic.go:334] "Generic (PLEG): container finished" podID="55398def-7876-49e4-9509-29374a5f9321" containerID="43f2c08ab8c7d46bfc3dcd7a50c24313f1a272150df709c5ef0da7ed110d9159" exitCode=143 Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.735927 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f96457d78-wrrfr" event={"ID":"55398def-7876-49e4-9509-29374a5f9321","Type":"ContainerDied","Data":"43f2c08ab8c7d46bfc3dcd7a50c24313f1a272150df709c5ef0da7ed110d9159"} Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.738585 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"47cfcedd-9367-442f-b907-a8a1738e30b8","Type":"ContainerDied","Data":"30633270b5a158264360dd3edb1d5d6cd533496dfe8bfef13eb7950907ac7181"} Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.738627 4667 scope.go:117] "RemoveContainer" containerID="e44269cf5ceeeafaf01a93e9e61e7bf3930bbb2e195057022bb61d9f00188ee4" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.738796 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.768815 4667 scope.go:117] "RemoveContainer" containerID="969b2e32747a82dd8661083013695427f244507a3c6d04d865d04193a46e57e1" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.793902 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.812877 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.820433 4667 scope.go:117] "RemoveContainer" containerID="76f153880c0f656f39bbba0f0256274ac42b948f47073e9118f0d3ac792d38e3" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.833633 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:08:43 crc kubenswrapper[4667]: E0131 04:08:43.834247 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47cfcedd-9367-442f-b907-a8a1738e30b8" containerName="ceilometer-central-agent" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.834271 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cfcedd-9367-442f-b907-a8a1738e30b8" containerName="ceilometer-central-agent" Jan 31 04:08:43 crc kubenswrapper[4667]: E0131 04:08:43.834303 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47cfcedd-9367-442f-b907-a8a1738e30b8" containerName="ceilometer-notification-agent" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.834311 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cfcedd-9367-442f-b907-a8a1738e30b8" containerName="ceilometer-notification-agent" Jan 31 04:08:43 crc kubenswrapper[4667]: E0131 04:08:43.834327 4667 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="47cfcedd-9367-442f-b907-a8a1738e30b8" containerName="sg-core" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.834345 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cfcedd-9367-442f-b907-a8a1738e30b8" containerName="sg-core" Jan 31 04:08:43 crc kubenswrapper[4667]: E0131 04:08:43.834363 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47cfcedd-9367-442f-b907-a8a1738e30b8" containerName="proxy-httpd" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.834371 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cfcedd-9367-442f-b907-a8a1738e30b8" containerName="proxy-httpd" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.834553 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="47cfcedd-9367-442f-b907-a8a1738e30b8" containerName="proxy-httpd" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.834569 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="47cfcedd-9367-442f-b907-a8a1738e30b8" containerName="sg-core" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.834583 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="47cfcedd-9367-442f-b907-a8a1738e30b8" containerName="ceilometer-central-agent" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.834594 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="47cfcedd-9367-442f-b907-a8a1738e30b8" containerName="ceilometer-notification-agent" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.839260 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.845187 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.845549 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.867459 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.903358 4667 scope.go:117] "RemoveContainer" containerID="da8e622f57beaa96c778bec445fb5caa3821fc1663844988d3b038c1fc2a241a" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.962582 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e49e208-fc35-469b-a53d-3c8b392a6bc7-log-httpd\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.962637 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-config-data\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.962705 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-scripts\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.962746 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.962782 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.962810 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px26d\" (UniqueName: \"kubernetes.io/projected/8e49e208-fc35-469b-a53d-3c8b392a6bc7-kube-api-access-px26d\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:43 crc kubenswrapper[4667]: I0131 04:08:43.962859 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e49e208-fc35-469b-a53d-3c8b392a6bc7-run-httpd\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:44 crc kubenswrapper[4667]: I0131 04:08:44.064373 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e49e208-fc35-469b-a53d-3c8b392a6bc7-run-httpd\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:44 crc kubenswrapper[4667]: I0131 04:08:44.065147 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e49e208-fc35-469b-a53d-3c8b392a6bc7-log-httpd\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:44 crc kubenswrapper[4667]: I0131 04:08:44.065483 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-config-data\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:44 crc kubenswrapper[4667]: I0131 04:08:44.066314 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-scripts\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:44 crc kubenswrapper[4667]: I0131 04:08:44.065059 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e49e208-fc35-469b-a53d-3c8b392a6bc7-run-httpd\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:44 crc kubenswrapper[4667]: I0131 04:08:44.065433 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e49e208-fc35-469b-a53d-3c8b392a6bc7-log-httpd\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:44 crc kubenswrapper[4667]: I0131 04:08:44.066725 4667 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:44 crc kubenswrapper[4667]: I0131 04:08:44.066976 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:44 crc kubenswrapper[4667]: I0131 04:08:44.067225 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px26d\" (UniqueName: \"kubernetes.io/projected/8e49e208-fc35-469b-a53d-3c8b392a6bc7-kube-api-access-px26d\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:44 crc kubenswrapper[4667]: I0131 04:08:44.072093 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-config-data\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:44 crc kubenswrapper[4667]: I0131 04:08:44.084379 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:44 crc kubenswrapper[4667]: I0131 04:08:44.084396 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:44 crc kubenswrapper[4667]: I0131 04:08:44.093091 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px26d\" (UniqueName: \"kubernetes.io/projected/8e49e208-fc35-469b-a53d-3c8b392a6bc7-kube-api-access-px26d\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:44 crc kubenswrapper[4667]: I0131 04:08:44.098156 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-scripts\") pod \"ceilometer-0\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " pod="openstack/ceilometer-0" Jan 31 04:08:44 crc kubenswrapper[4667]: I0131 04:08:44.192157 4667 util.go:30] "No sandbox for pod can be found. 
Jan 31 04:08:44 crc kubenswrapper[4667]: I0131 04:08:44.406947 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 04:08:44 crc kubenswrapper[4667]: I0131 04:08:44.767286 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 31 04:08:45 crc kubenswrapper[4667]: I0131 04:08:45.297987 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47cfcedd-9367-442f-b907-a8a1738e30b8" path="/var/lib/kubelet/pods/47cfcedd-9367-442f-b907-a8a1738e30b8/volumes"
Jan 31 04:08:45 crc kubenswrapper[4667]: I0131 04:08:45.704130 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 04:08:45 crc kubenswrapper[4667]: I0131 04:08:45.704193 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 04:08:45 crc kubenswrapper[4667]: I0131 04:08:45.758696 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e49e208-fc35-469b-a53d-3c8b392a6bc7","Type":"ContainerStarted","Data":"6c3e380cad194af4de2378c9721605703d0c9c34bdbebc6536e87be7d7e44e9f"}
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.726710 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f96457d78-wrrfr"
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.770581 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e49e208-fc35-469b-a53d-3c8b392a6bc7","Type":"ContainerStarted","Data":"7e4959f43922ea44045e67b50343a5ffd986dcfde1ed6571be2a9b98b14e3896"}
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.772343 4667 generic.go:334] "Generic (PLEG): container finished" podID="55398def-7876-49e4-9509-29374a5f9321" containerID="93c9f813d28740047f5f15de7f6fe09cb14534a8f751e97820605552bd6ca35c" exitCode=0
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.772377 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f96457d78-wrrfr" event={"ID":"55398def-7876-49e4-9509-29374a5f9321","Type":"ContainerDied","Data":"93c9f813d28740047f5f15de7f6fe09cb14534a8f751e97820605552bd6ca35c"}
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.772397 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7f96457d78-wrrfr" event={"ID":"55398def-7876-49e4-9509-29374a5f9321","Type":"ContainerDied","Data":"49c0a3d00150cd561fe37c24f2eacf950be54cda5c096d6630d6856a10036fed"}
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.772418 4667 scope.go:117] "RemoveContainer" containerID="93c9f813d28740047f5f15de7f6fe09cb14534a8f751e97820605552bd6ca35c"
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.772568 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7f96457d78-wrrfr"
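The prober entries above record a liveness probe failing with connection refused against http://127.0.0.1:8798/health. Roughly, an HTTP probe is a short-timeout GET whose transport errors and non-2xx/3xx statuses map to failure; a sketch of that check (probeHTTP is a stand-in, not the kubelet prober's API):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeHTTP performs the kind of check prober.go reports above: an HTTP
// GET against the container's health endpoint; a transport error (e.g.
// "connect: connection refused") or an unexpected status is a failure.
func probeHTTP(url string) (string, error) {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return "failure", err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return "success", nil
	}
	return "failure", fmt.Errorf("unexpected status %d", resp.StatusCode)
}

func main() {
	// URL taken from the log; with nothing listening, this fails
	// exactly the way the machine-config-daemon probe does above.
	result, err := probeHTTP("http://127.0.0.1:8798/health")
	fmt.Printf("probeResult=%q err=%v\n", result, err)
}
```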
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.844609 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsd52\" (UniqueName: \"kubernetes.io/projected/55398def-7876-49e4-9509-29374a5f9321-kube-api-access-bsd52\") pod \"55398def-7876-49e4-9509-29374a5f9321\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") "
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.844831 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-internal-tls-certs\") pod \"55398def-7876-49e4-9509-29374a5f9321\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") "
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.844925 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55398def-7876-49e4-9509-29374a5f9321-logs\") pod \"55398def-7876-49e4-9509-29374a5f9321\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") "
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.844963 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-combined-ca-bundle\") pod \"55398def-7876-49e4-9509-29374a5f9321\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") "
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.844987 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-scripts\") pod \"55398def-7876-49e4-9509-29374a5f9321\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") "
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.845039 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-config-data\") pod \"55398def-7876-49e4-9509-29374a5f9321\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") "
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.845077 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-public-tls-certs\") pod \"55398def-7876-49e4-9509-29374a5f9321\" (UID: \"55398def-7876-49e4-9509-29374a5f9321\") "
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.854785 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55398def-7876-49e4-9509-29374a5f9321-logs" (OuterVolumeSpecName: "logs") pod "55398def-7876-49e4-9509-29374a5f9321" (UID: "55398def-7876-49e4-9509-29374a5f9321"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.867816 4667 scope.go:117] "RemoveContainer" containerID="43f2c08ab8c7d46bfc3dcd7a50c24313f1a272150df709c5ef0da7ed110d9159"
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.891174 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55398def-7876-49e4-9509-29374a5f9321-kube-api-access-bsd52" (OuterVolumeSpecName: "kube-api-access-bsd52") pod "55398def-7876-49e4-9509-29374a5f9321" (UID: "55398def-7876-49e4-9509-29374a5f9321"). InnerVolumeSpecName "kube-api-access-bsd52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.906463 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-scripts" (OuterVolumeSpecName: "scripts") pod "55398def-7876-49e4-9509-29374a5f9321" (UID: "55398def-7876-49e4-9509-29374a5f9321"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.922222 4667 scope.go:117] "RemoveContainer" containerID="93c9f813d28740047f5f15de7f6fe09cb14534a8f751e97820605552bd6ca35c"
Jan 31 04:08:46 crc kubenswrapper[4667]: E0131 04:08:46.922878 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93c9f813d28740047f5f15de7f6fe09cb14534a8f751e97820605552bd6ca35c\": container with ID starting with 93c9f813d28740047f5f15de7f6fe09cb14534a8f751e97820605552bd6ca35c not found: ID does not exist" containerID="93c9f813d28740047f5f15de7f6fe09cb14534a8f751e97820605552bd6ca35c"
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.922998 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93c9f813d28740047f5f15de7f6fe09cb14534a8f751e97820605552bd6ca35c"} err="failed to get container status \"93c9f813d28740047f5f15de7f6fe09cb14534a8f751e97820605552bd6ca35c\": rpc error: code = NotFound desc = could not find container \"93c9f813d28740047f5f15de7f6fe09cb14534a8f751e97820605552bd6ca35c\": container with ID starting with 93c9f813d28740047f5f15de7f6fe09cb14534a8f751e97820605552bd6ca35c not found: ID does not exist"
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.923098 4667 scope.go:117] "RemoveContainer" containerID="43f2c08ab8c7d46bfc3dcd7a50c24313f1a272150df709c5ef0da7ed110d9159"
Jan 31 04:08:46 crc kubenswrapper[4667]: E0131 04:08:46.933240 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43f2c08ab8c7d46bfc3dcd7a50c24313f1a272150df709c5ef0da7ed110d9159\": container with ID starting with 43f2c08ab8c7d46bfc3dcd7a50c24313f1a272150df709c5ef0da7ed110d9159 not found: ID does not exist" containerID="43f2c08ab8c7d46bfc3dcd7a50c24313f1a272150df709c5ef0da7ed110d9159"
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.933426 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43f2c08ab8c7d46bfc3dcd7a50c24313f1a272150df709c5ef0da7ed110d9159"} err="failed to get container status \"43f2c08ab8c7d46bfc3dcd7a50c24313f1a272150df709c5ef0da7ed110d9159\": rpc error: code = NotFound desc = could not find container \"43f2c08ab8c7d46bfc3dcd7a50c24313f1a272150df709c5ef0da7ed110d9159\": container with ID starting with 43f2c08ab8c7d46bfc3dcd7a50c24313f1a272150df709c5ef0da7ed110d9159 not found: ID does not exist"
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.948182 4667 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55398def-7876-49e4-9509-29374a5f9321-logs\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.948233 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.948248 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsd52\" (UniqueName: \"kubernetes.io/projected/55398def-7876-49e4-9509-29374a5f9321-kube-api-access-bsd52\") on node \"crc\" DevicePath \"\""
for volume \"kube-api-access-bsd52\" (UniqueName: \"kubernetes.io/projected/55398def-7876-49e4-9509-29374a5f9321-kube-api-access-bsd52\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.964812 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-config-data" (OuterVolumeSpecName: "config-data") pod "55398def-7876-49e4-9509-29374a5f9321" (UID: "55398def-7876-49e4-9509-29374a5f9321"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:46 crc kubenswrapper[4667]: I0131 04:08:46.966182 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55398def-7876-49e4-9509-29374a5f9321" (UID: "55398def-7876-49e4-9509-29374a5f9321"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:47 crc kubenswrapper[4667]: I0131 04:08:47.050404 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:47 crc kubenswrapper[4667]: I0131 04:08:47.050449 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:47 crc kubenswrapper[4667]: I0131 04:08:47.060269 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "55398def-7876-49e4-9509-29374a5f9321" (UID: "55398def-7876-49e4-9509-29374a5f9321"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:47 crc kubenswrapper[4667]: I0131 04:08:47.090931 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "55398def-7876-49e4-9509-29374a5f9321" (UID: "55398def-7876-49e4-9509-29374a5f9321"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:47 crc kubenswrapper[4667]: I0131 04:08:47.152821 4667 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:47 crc kubenswrapper[4667]: I0131 04:08:47.152879 4667 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55398def-7876-49e4-9509-29374a5f9321-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:47 crc kubenswrapper[4667]: I0131 04:08:47.419194 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7f96457d78-wrrfr"] Jan 31 04:08:47 crc kubenswrapper[4667]: I0131 04:08:47.427579 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7f96457d78-wrrfr"] Jan 31 04:08:47 crc kubenswrapper[4667]: I0131 04:08:47.792122 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e49e208-fc35-469b-a53d-3c8b392a6bc7","Type":"ContainerStarted","Data":"b41188e39946c0f57b973291db1df1bb50eb58ae5fe1231e2163c4925dce5198"} Jan 31 04:08:48 crc kubenswrapper[4667]: I0131 04:08:48.805413 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e49e208-fc35-469b-a53d-3c8b392a6bc7","Type":"ContainerStarted","Data":"d76a666a12fd5ab4c663aaa18d763b14822e2b424323ae2a8c468ea12d88ca85"} Jan 31 04:08:49 crc kubenswrapper[4667]: I0131 04:08:49.190058 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 31 04:08:49 crc kubenswrapper[4667]: I0131 04:08:49.190352 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="75c7336f-29b1-4a8a-88c1-69eec14a92b7" containerName="glance-log" containerID="cri-o://6580942cad36b126b755e3abd72d1cf44bec2102fa46a2dfdefa48d3b287cb12" gracePeriod=30 Jan 31 04:08:49 crc kubenswrapper[4667]: I0131 04:08:49.190877 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="75c7336f-29b1-4a8a-88c1-69eec14a92b7" containerName="glance-httpd" containerID="cri-o://dcf86acf1b087b1327e20c15a2391601c9392b1f310f6ca59b658c3320521994" gracePeriod=30 Jan 31 04:08:49 crc kubenswrapper[4667]: I0131 04:08:49.367046 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55398def-7876-49e4-9509-29374a5f9321" path="/var/lib/kubelet/pods/55398def-7876-49e4-9509-29374a5f9321/volumes" Jan 31 04:08:49 crc kubenswrapper[4667]: I0131 04:08:49.818465 4667 generic.go:334] "Generic (PLEG): container finished" podID="75c7336f-29b1-4a8a-88c1-69eec14a92b7" containerID="6580942cad36b126b755e3abd72d1cf44bec2102fa46a2dfdefa48d3b287cb12" exitCode=143 Jan 31 04:08:49 crc kubenswrapper[4667]: I0131 04:08:49.818519 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75c7336f-29b1-4a8a-88c1-69eec14a92b7","Type":"ContainerDied","Data":"6580942cad36b126b755e3abd72d1cf44bec2102fa46a2dfdefa48d3b287cb12"} Jan 31 04:08:50 crc kubenswrapper[4667]: I0131 04:08:50.310120 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="10513551-238c-4a99-83c9-2992fb1bbaae" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.174:8776/healthcheck\": net/http: request canceled 
Jan 31 04:08:50 crc kubenswrapper[4667]: I0131 04:08:50.645979 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 31 04:08:50 crc kubenswrapper[4667]: I0131 04:08:50.889853 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e49e208-fc35-469b-a53d-3c8b392a6bc7","Type":"ContainerStarted","Data":"a28a68de9577b7425cd1da116336630113e3c1d52663f4493b7570ed94162c06"}
Jan 31 04:08:50 crc kubenswrapper[4667]: I0131 04:08:50.890104 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerName="ceilometer-central-agent" containerID="cri-o://7e4959f43922ea44045e67b50343a5ffd986dcfde1ed6571be2a9b98b14e3896" gracePeriod=30
Jan 31 04:08:50 crc kubenswrapper[4667]: I0131 04:08:50.890492 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 31 04:08:50 crc kubenswrapper[4667]: I0131 04:08:50.890824 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerName="proxy-httpd" containerID="cri-o://a28a68de9577b7425cd1da116336630113e3c1d52663f4493b7570ed94162c06" gracePeriod=30
Jan 31 04:08:50 crc kubenswrapper[4667]: I0131 04:08:50.890906 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerName="sg-core" containerID="cri-o://d76a666a12fd5ab4c663aaa18d763b14822e2b424323ae2a8c468ea12d88ca85" gracePeriod=30
Jan 31 04:08:50 crc kubenswrapper[4667]: I0131 04:08:50.890953 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerName="ceilometer-notification-agent" containerID="cri-o://b41188e39946c0f57b973291db1df1bb50eb58ae5fe1231e2163c4925dce5198" gracePeriod=30
Jan 31 04:08:50 crc kubenswrapper[4667]: I0131 04:08:50.929138 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.388627459 podStartE2EDuration="7.929109812s" podCreationTimestamp="2026-01-31 04:08:43 +0000 UTC" firstStartedPulling="2026-01-31 04:08:44.779032957 +0000 UTC m=+1248.295368246" lastFinishedPulling="2026-01-31 04:08:50.3195153 +0000 UTC m=+1253.835850599" observedRunningTime="2026-01-31 04:08:50.925384133 +0000 UTC m=+1254.441719442" watchObservedRunningTime="2026-01-31 04:08:50.929109812 +0000 UTC m=+1254.445445111"
Jan 31 04:08:51 crc kubenswrapper[4667]: I0131 04:08:51.758065 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Jan 31 04:08:51 crc kubenswrapper[4667]: I0131 04:08:51.845646 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86c748c4d6-2grmh" podUID="c6974567-3bea-447a-bb8b-ced22b6d34ce" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused"
Jan 31 04:08:51 crc kubenswrapper[4667]: I0131 04:08:51.903284 4667 generic.go:334] "Generic (PLEG): container finished" podID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerID="a28a68de9577b7425cd1da116336630113e3c1d52663f4493b7570ed94162c06" exitCode=0
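The pod_startup_latency_tracker entry for ceilometer-0 above can be reproduced from its own timestamps: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (7.929109812s), and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling), leaving roughly the logged 2.388627459 (small rounding aside). Checking the arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

// Reconstructing the arithmetic behind pod_startup_latency_tracker.go:104,
// using the timestamps from the ceilometer-0 entry above.
func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-31 04:08:43 +0000 UTC")
	firstPull := parse("2026-01-31 04:08:44.779032957 +0000 UTC")
	lastPull := parse("2026-01-31 04:08:50.3195153 +0000 UTC")
	watched := parse("2026-01-31 04:08:50.929109812 +0000 UTC")

	e2e := watched.Sub(created)          // 7.929109812s, as logged
	slo := e2e - lastPull.Sub(firstPull) // ~2.3886s: startup minus image pulls
	fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%v\n", e2e, slo)
}
```

When nothing was pulled, the zero-valued pull timestamps make the two durations equal, as in the later glance-default-external-api-0 entry.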
"Generic (PLEG): container finished" podID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerID="a28a68de9577b7425cd1da116336630113e3c1d52663f4493b7570ed94162c06" exitCode=0 Jan 31 04:08:51 crc kubenswrapper[4667]: I0131 04:08:51.903326 4667 generic.go:334] "Generic (PLEG): container finished" podID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerID="d76a666a12fd5ab4c663aaa18d763b14822e2b424323ae2a8c468ea12d88ca85" exitCode=2 Jan 31 04:08:51 crc kubenswrapper[4667]: I0131 04:08:51.903336 4667 generic.go:334] "Generic (PLEG): container finished" podID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerID="b41188e39946c0f57b973291db1df1bb50eb58ae5fe1231e2163c4925dce5198" exitCode=0 Jan 31 04:08:51 crc kubenswrapper[4667]: I0131 04:08:51.903385 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e49e208-fc35-469b-a53d-3c8b392a6bc7","Type":"ContainerDied","Data":"a28a68de9577b7425cd1da116336630113e3c1d52663f4493b7570ed94162c06"} Jan 31 04:08:51 crc kubenswrapper[4667]: I0131 04:08:51.903471 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e49e208-fc35-469b-a53d-3c8b392a6bc7","Type":"ContainerDied","Data":"d76a666a12fd5ab4c663aaa18d763b14822e2b424323ae2a8c468ea12d88ca85"} Jan 31 04:08:51 crc kubenswrapper[4667]: I0131 04:08:51.903489 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e49e208-fc35-469b-a53d-3c8b392a6bc7","Type":"ContainerDied","Data":"b41188e39946c0f57b973291db1df1bb50eb58ae5fe1231e2163c4925dce5198"} Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.534805 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.645596 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75c7336f-29b1-4a8a-88c1-69eec14a92b7-httpd-run\") pod \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.646120 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x79h\" (UniqueName: \"kubernetes.io/projected/75c7336f-29b1-4a8a-88c1-69eec14a92b7-kube-api-access-9x79h\") pod \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.647521 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75c7336f-29b1-4a8a-88c1-69eec14a92b7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "75c7336f-29b1-4a8a-88c1-69eec14a92b7" (UID: "75c7336f-29b1-4a8a-88c1-69eec14a92b7"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.647569 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.647645 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75c7336f-29b1-4a8a-88c1-69eec14a92b7-logs\") pod \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.647706 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-config-data\") pod \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.647768 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-public-tls-certs\") pod \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.647792 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-combined-ca-bundle\") pod \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.647873 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-scripts\") pod \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\" (UID: \"75c7336f-29b1-4a8a-88c1-69eec14a92b7\") " Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.648446 4667 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75c7336f-29b1-4a8a-88c1-69eec14a92b7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.655852 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-scripts" (OuterVolumeSpecName: "scripts") pod "75c7336f-29b1-4a8a-88c1-69eec14a92b7" (UID: "75c7336f-29b1-4a8a-88c1-69eec14a92b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.657949 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75c7336f-29b1-4a8a-88c1-69eec14a92b7-logs" (OuterVolumeSpecName: "logs") pod "75c7336f-29b1-4a8a-88c1-69eec14a92b7" (UID: "75c7336f-29b1-4a8a-88c1-69eec14a92b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.669317 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "75c7336f-29b1-4a8a-88c1-69eec14a92b7" (UID: "75c7336f-29b1-4a8a-88c1-69eec14a92b7"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.670032 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75c7336f-29b1-4a8a-88c1-69eec14a92b7-kube-api-access-9x79h" (OuterVolumeSpecName: "kube-api-access-9x79h") pod "75c7336f-29b1-4a8a-88c1-69eec14a92b7" (UID: "75c7336f-29b1-4a8a-88c1-69eec14a92b7"). InnerVolumeSpecName "kube-api-access-9x79h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.762544 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.762991 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x79h\" (UniqueName: \"kubernetes.io/projected/75c7336f-29b1-4a8a-88c1-69eec14a92b7-kube-api-access-9x79h\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.763081 4667 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.763147 4667 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75c7336f-29b1-4a8a-88c1-69eec14a92b7-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.798980 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75c7336f-29b1-4a8a-88c1-69eec14a92b7" (UID: "75c7336f-29b1-4a8a-88c1-69eec14a92b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.814274 4667 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.837996 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "75c7336f-29b1-4a8a-88c1-69eec14a92b7" (UID: "75c7336f-29b1-4a8a-88c1-69eec14a92b7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.843541 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-config-data" (OuterVolumeSpecName: "config-data") pod "75c7336f-29b1-4a8a-88c1-69eec14a92b7" (UID: "75c7336f-29b1-4a8a-88c1-69eec14a92b7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.865710 4667 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.865749 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.865765 4667 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.865776 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75c7336f-29b1-4a8a-88c1-69eec14a92b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.929302 4667 generic.go:334] "Generic (PLEG): container finished" podID="75c7336f-29b1-4a8a-88c1-69eec14a92b7" containerID="dcf86acf1b087b1327e20c15a2391601c9392b1f310f6ca59b658c3320521994" exitCode=0 Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.929353 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75c7336f-29b1-4a8a-88c1-69eec14a92b7","Type":"ContainerDied","Data":"dcf86acf1b087b1327e20c15a2391601c9392b1f310f6ca59b658c3320521994"} Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.929383 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75c7336f-29b1-4a8a-88c1-69eec14a92b7","Type":"ContainerDied","Data":"73ec87c4c7504f6ec9d3cf013a4540adbf843b70a0166fbfbab419c322c0ac92"} Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.929401 4667 scope.go:117] "RemoveContainer" containerID="dcf86acf1b087b1327e20c15a2391601c9392b1f310f6ca59b658c3320521994" Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.929553 4667 util.go:48] "No ready sandbox for pod can be found. 
Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.979883 4667 scope.go:117] "RemoveContainer" containerID="6580942cad36b126b755e3abd72d1cf44bec2102fa46a2dfdefa48d3b287cb12"
Jan 31 04:08:53 crc kubenswrapper[4667]: I0131 04:08:53.992915 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.007526 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.021062 4667 scope.go:117] "RemoveContainer" containerID="dcf86acf1b087b1327e20c15a2391601c9392b1f310f6ca59b658c3320521994"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.021596 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 31 04:08:54 crc kubenswrapper[4667]: E0131 04:08:54.022051 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55398def-7876-49e4-9509-29374a5f9321" containerName="placement-log"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.022070 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="55398def-7876-49e4-9509-29374a5f9321" containerName="placement-log"
Jan 31 04:08:54 crc kubenswrapper[4667]: E0131 04:08:54.022090 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55398def-7876-49e4-9509-29374a5f9321" containerName="placement-api"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.022098 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="55398def-7876-49e4-9509-29374a5f9321" containerName="placement-api"
Jan 31 04:08:54 crc kubenswrapper[4667]: E0131 04:08:54.022105 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c7336f-29b1-4a8a-88c1-69eec14a92b7" containerName="glance-httpd"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.022111 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c7336f-29b1-4a8a-88c1-69eec14a92b7" containerName="glance-httpd"
Jan 31 04:08:54 crc kubenswrapper[4667]: E0131 04:08:54.022140 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c7336f-29b1-4a8a-88c1-69eec14a92b7" containerName="glance-log"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.022146 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c7336f-29b1-4a8a-88c1-69eec14a92b7" containerName="glance-log"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.022314 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="55398def-7876-49e4-9509-29374a5f9321" containerName="placement-log"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.022346 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="75c7336f-29b1-4a8a-88c1-69eec14a92b7" containerName="glance-httpd"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.022363 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="55398def-7876-49e4-9509-29374a5f9321" containerName="placement-api"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.022382 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="75c7336f-29b1-4a8a-88c1-69eec14a92b7" containerName="glance-log"
Jan 31 04:08:54 crc kubenswrapper[4667]: E0131 04:08:54.022322 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcf86acf1b087b1327e20c15a2391601c9392b1f310f6ca59b658c3320521994\": container with ID starting with dcf86acf1b087b1327e20c15a2391601c9392b1f310f6ca59b658c3320521994 not found: ID does not exist" containerID="dcf86acf1b087b1327e20c15a2391601c9392b1f310f6ca59b658c3320521994"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.024065 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcf86acf1b087b1327e20c15a2391601c9392b1f310f6ca59b658c3320521994"} err="failed to get container status \"dcf86acf1b087b1327e20c15a2391601c9392b1f310f6ca59b658c3320521994\": rpc error: code = NotFound desc = could not find container \"dcf86acf1b087b1327e20c15a2391601c9392b1f310f6ca59b658c3320521994\": container with ID starting with dcf86acf1b087b1327e20c15a2391601c9392b1f310f6ca59b658c3320521994 not found: ID does not exist"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.024101 4667 scope.go:117] "RemoveContainer" containerID="6580942cad36b126b755e3abd72d1cf44bec2102fa46a2dfdefa48d3b287cb12"
Jan 31 04:08:54 crc kubenswrapper[4667]: E0131 04:08:54.025873 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6580942cad36b126b755e3abd72d1cf44bec2102fa46a2dfdefa48d3b287cb12\": container with ID starting with 6580942cad36b126b755e3abd72d1cf44bec2102fa46a2dfdefa48d3b287cb12 not found: ID does not exist" containerID="6580942cad36b126b755e3abd72d1cf44bec2102fa46a2dfdefa48d3b287cb12"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.025903 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6580942cad36b126b755e3abd72d1cf44bec2102fa46a2dfdefa48d3b287cb12"} err="failed to get container status \"6580942cad36b126b755e3abd72d1cf44bec2102fa46a2dfdefa48d3b287cb12\": rpc error: code = NotFound desc = could not find container \"6580942cad36b126b755e3abd72d1cf44bec2102fa46a2dfdefa48d3b287cb12\": container with ID starting with 6580942cad36b126b755e3abd72d1cf44bec2102fa46a2dfdefa48d3b287cb12 not found: ID does not exist"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.026586 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.028871 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.031154 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.056711 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.172070 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e0a40d-c35f-443a-97b3-0150c13d56e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.172139 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.172166 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e0a40d-c35f-443a-97b3-0150c13d56e4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.172192 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e0a40d-c35f-443a-97b3-0150c13d56e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.172226 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmrpd\" (UniqueName: \"kubernetes.io/projected/e2e0a40d-c35f-443a-97b3-0150c13d56e4-kube-api-access-wmrpd\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.172255 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e0a40d-c35f-443a-97b3-0150c13d56e4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.172301 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2e0a40d-c35f-443a-97b3-0150c13d56e4-logs\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.172348 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2e0a40d-c35f-443a-97b3-0150c13d56e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.274533 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e0a40d-c35f-443a-97b3-0150c13d56e4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.274621 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2e0a40d-c35f-443a-97b3-0150c13d56e4-logs\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.274688 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2e0a40d-c35f-443a-97b3-0150c13d56e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.274737 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e0a40d-c35f-443a-97b3-0150c13d56e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.274830 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.274881 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e0a40d-c35f-443a-97b3-0150c13d56e4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.274916 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e0a40d-c35f-443a-97b3-0150c13d56e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.274962 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmrpd\" (UniqueName: \"kubernetes.io/projected/e2e0a40d-c35f-443a-97b3-0150c13d56e4-kube-api-access-wmrpd\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.275176 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2e0a40d-c35f-443a-97b3-0150c13d56e4-logs\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.275265 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2e0a40d-c35f-443a-97b3-0150c13d56e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.275283 4667 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.282143 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e0a40d-c35f-443a-97b3-0150c13d56e4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.289193 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e0a40d-c35f-443a-97b3-0150c13d56e4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.289644 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e0a40d-c35f-443a-97b3-0150c13d56e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.306725 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmrpd\" (UniqueName: \"kubernetes.io/projected/e2e0a40d-c35f-443a-97b3-0150c13d56e4-kube-api-access-wmrpd\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.329256 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e0a40d-c35f-443a-97b3-0150c13d56e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.390271 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e2e0a40d-c35f-443a-97b3-0150c13d56e4\") " pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.643265 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.787596 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.788345 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4128ea2d-f529-4224-a008-560c8920dc8f" containerName="glance-log" containerID="cri-o://7e38a694cb59c8a8c538783300ba6051eef6525a53c82c8b59aa44c4a104d40b" gracePeriod=30
Jan 31 04:08:54 crc kubenswrapper[4667]: I0131 04:08:54.789126 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4128ea2d-f529-4224-a008-560c8920dc8f" containerName="glance-httpd" containerID="cri-o://b7ba3b069b9a1e0ee79306c615d0a44bc6b2c2e42cc95b001530e969cd870866" gracePeriod=30
Jan 31 04:08:55 crc kubenswrapper[4667]: I0131 04:08:55.020779 4667 generic.go:334] "Generic (PLEG): container finished" podID="4128ea2d-f529-4224-a008-560c8920dc8f" containerID="7e38a694cb59c8a8c538783300ba6051eef6525a53c82c8b59aa44c4a104d40b" exitCode=143
Jan 31 04:08:55 crc kubenswrapper[4667]: I0131 04:08:55.021034 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4128ea2d-f529-4224-a008-560c8920dc8f","Type":"ContainerDied","Data":"7e38a694cb59c8a8c538783300ba6051eef6525a53c82c8b59aa44c4a104d40b"}
Jan 31 04:08:55 crc kubenswrapper[4667]: I0131 04:08:55.307821 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75c7336f-29b1-4a8a-88c1-69eec14a92b7" path="/var/lib/kubelet/pods/75c7336f-29b1-4a8a-88c1-69eec14a92b7/volumes"
Jan 31 04:08:55 crc kubenswrapper[4667]: I0131 04:08:55.443459 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 31 04:08:56 crc kubenswrapper[4667]: I0131 04:08:56.065042 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2e0a40d-c35f-443a-97b3-0150c13d56e4","Type":"ContainerStarted","Data":"65f0b6d243e637880617d8df06eb0bd2fa986af9c34ed583e9c545deb2d8acc9"}
Jan 31 04:08:57 crc kubenswrapper[4667]: I0131 04:08:57.083737 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2e0a40d-c35f-443a-97b3-0150c13d56e4","Type":"ContainerStarted","Data":"0a69adde675a101daa0a4d5ee22b58dd1dc06e852f3fa88600a527a372675640"}
Jan 31 04:08:58 crc kubenswrapper[4667]: I0131 04:08:58.133851 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2e0a40d-c35f-443a-97b3-0150c13d56e4","Type":"ContainerStarted","Data":"e13329f9d4824c6ffa8816f0fd1a82bac7daf517c566f80b50c54a04a188295e"}
Jan 31 04:08:58 crc kubenswrapper[4667]: I0131 04:08:58.200981 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.199811425 podStartE2EDuration="5.199811425s" podCreationTimestamp="2026-01-31 04:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:08:58.17664082 +0000 UTC m=+1261.692976119" watchObservedRunningTime="2026-01-31 04:08:58.199811425 +0000 UTC m=+1261.716146724"
Jan 31 04:08:58 crc kubenswrapper[4667]: I0131 04:08:58.732552 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 31 04:08:58 crc kubenswrapper[4667]: I0131 04:08:58.898413 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"4128ea2d-f529-4224-a008-560c8920dc8f\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") "
Jan 31 04:08:58 crc kubenswrapper[4667]: I0131 04:08:58.898502 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x6jm\" (UniqueName: \"kubernetes.io/projected/4128ea2d-f529-4224-a008-560c8920dc8f-kube-api-access-5x6jm\") pod \"4128ea2d-f529-4224-a008-560c8920dc8f\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") "
Jan 31 04:08:58 crc kubenswrapper[4667]: I0131 04:08:58.898565 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-combined-ca-bundle\") pod \"4128ea2d-f529-4224-a008-560c8920dc8f\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") "
Jan 31 04:08:58 crc kubenswrapper[4667]: I0131 04:08:58.898652 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4128ea2d-f529-4224-a008-560c8920dc8f-httpd-run\") pod \"4128ea2d-f529-4224-a008-560c8920dc8f\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") "
Jan 31 04:08:58 crc kubenswrapper[4667]: I0131 04:08:58.898753 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4128ea2d-f529-4224-a008-560c8920dc8f-logs\") pod \"4128ea2d-f529-4224-a008-560c8920dc8f\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") "
Jan 31 04:08:58 crc kubenswrapper[4667]: I0131 04:08:58.898788 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-internal-tls-certs\") pod \"4128ea2d-f529-4224-a008-560c8920dc8f\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") "
Jan 31 04:08:58 crc kubenswrapper[4667]: I0131 04:08:58.898820 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-config-data\") pod \"4128ea2d-f529-4224-a008-560c8920dc8f\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") "
Jan 31 04:08:58 crc kubenswrapper[4667]: I0131 04:08:58.898858 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-scripts\") pod \"4128ea2d-f529-4224-a008-560c8920dc8f\" (UID: \"4128ea2d-f529-4224-a008-560c8920dc8f\") "
Jan 31 04:08:58 crc kubenswrapper[4667]: I0131 04:08:58.900314 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4128ea2d-f529-4224-a008-560c8920dc8f-logs" (OuterVolumeSpecName: "logs") pod "4128ea2d-f529-4224-a008-560c8920dc8f" (UID: "4128ea2d-f529-4224-a008-560c8920dc8f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:08:58 crc kubenswrapper[4667]: I0131 04:08:58.900320 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4128ea2d-f529-4224-a008-560c8920dc8f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4128ea2d-f529-4224-a008-560c8920dc8f" (UID: "4128ea2d-f529-4224-a008-560c8920dc8f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:08:58 crc kubenswrapper[4667]: I0131 04:08:58.919212 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4128ea2d-f529-4224-a008-560c8920dc8f-kube-api-access-5x6jm" (OuterVolumeSpecName: "kube-api-access-5x6jm") pod "4128ea2d-f529-4224-a008-560c8920dc8f" (UID: "4128ea2d-f529-4224-a008-560c8920dc8f"). InnerVolumeSpecName "kube-api-access-5x6jm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:08:58 crc kubenswrapper[4667]: I0131 04:08:58.919253 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-scripts" (OuterVolumeSpecName: "scripts") pod "4128ea2d-f529-4224-a008-560c8920dc8f" (UID: "4128ea2d-f529-4224-a008-560c8920dc8f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:08:58 crc kubenswrapper[4667]: I0131 04:08:58.964217 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "4128ea2d-f529-4224-a008-560c8920dc8f" (UID: "4128ea2d-f529-4224-a008-560c8920dc8f"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 31 04:08:58 crc kubenswrapper[4667]: I0131 04:08:58.998659 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4128ea2d-f529-4224-a008-560c8920dc8f" (UID: "4128ea2d-f529-4224-a008-560c8920dc8f"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.003549 4667 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4128ea2d-f529-4224-a008-560c8920dc8f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.003590 4667 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4128ea2d-f529-4224-a008-560c8920dc8f-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.003600 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.003635 4667 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.003647 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x6jm\" (UniqueName: \"kubernetes.io/projected/4128ea2d-f529-4224-a008-560c8920dc8f-kube-api-access-5x6jm\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.003658 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.022965 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-config-data" (OuterVolumeSpecName: "config-data") pod "4128ea2d-f529-4224-a008-560c8920dc8f" (UID: "4128ea2d-f529-4224-a008-560c8920dc8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.031043 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4128ea2d-f529-4224-a008-560c8920dc8f" (UID: "4128ea2d-f529-4224-a008-560c8920dc8f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.036703 4667 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.106051 4667 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.106087 4667 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.106099 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4128ea2d-f529-4224-a008-560c8920dc8f-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.143997 4667 generic.go:334] "Generic (PLEG): container finished" podID="4128ea2d-f529-4224-a008-560c8920dc8f" containerID="b7ba3b069b9a1e0ee79306c615d0a44bc6b2c2e42cc95b001530e969cd870866" exitCode=0 Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.145069 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.148166 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4128ea2d-f529-4224-a008-560c8920dc8f","Type":"ContainerDied","Data":"b7ba3b069b9a1e0ee79306c615d0a44bc6b2c2e42cc95b001530e969cd870866"} Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.148231 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4128ea2d-f529-4224-a008-560c8920dc8f","Type":"ContainerDied","Data":"1fdcfe3dafd53602d2a7af5ece97f1454b7a5bd73f01624bd6aeb6bb8df9b5ec"} Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.148257 4667 scope.go:117] "RemoveContainer" containerID="b7ba3b069b9a1e0ee79306c615d0a44bc6b2c2e42cc95b001530e969cd870866" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.206817 4667 scope.go:117] "RemoveContainer" containerID="7e38a694cb59c8a8c538783300ba6051eef6525a53c82c8b59aa44c4a104d40b" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.227703 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.254055 4667 scope.go:117] "RemoveContainer" containerID="b7ba3b069b9a1e0ee79306c615d0a44bc6b2c2e42cc95b001530e969cd870866" Jan 31 04:08:59 crc kubenswrapper[4667]: E0131 04:08:59.254569 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7ba3b069b9a1e0ee79306c615d0a44bc6b2c2e42cc95b001530e969cd870866\": container with ID starting with b7ba3b069b9a1e0ee79306c615d0a44bc6b2c2e42cc95b001530e969cd870866 not found: ID does not exist" containerID="b7ba3b069b9a1e0ee79306c615d0a44bc6b2c2e42cc95b001530e969cd870866" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.254603 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7ba3b069b9a1e0ee79306c615d0a44bc6b2c2e42cc95b001530e969cd870866"} err="failed to get 
container status \"b7ba3b069b9a1e0ee79306c615d0a44bc6b2c2e42cc95b001530e969cd870866\": rpc error: code = NotFound desc = could not find container \"b7ba3b069b9a1e0ee79306c615d0a44bc6b2c2e42cc95b001530e969cd870866\": container with ID starting with b7ba3b069b9a1e0ee79306c615d0a44bc6b2c2e42cc95b001530e969cd870866 not found: ID does not exist" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.254624 4667 scope.go:117] "RemoveContainer" containerID="7e38a694cb59c8a8c538783300ba6051eef6525a53c82c8b59aa44c4a104d40b" Jan 31 04:08:59 crc kubenswrapper[4667]: E0131 04:08:59.254829 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e38a694cb59c8a8c538783300ba6051eef6525a53c82c8b59aa44c4a104d40b\": container with ID starting with 7e38a694cb59c8a8c538783300ba6051eef6525a53c82c8b59aa44c4a104d40b not found: ID does not exist" containerID="7e38a694cb59c8a8c538783300ba6051eef6525a53c82c8b59aa44c4a104d40b" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.254869 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e38a694cb59c8a8c538783300ba6051eef6525a53c82c8b59aa44c4a104d40b"} err="failed to get container status \"7e38a694cb59c8a8c538783300ba6051eef6525a53c82c8b59aa44c4a104d40b\": rpc error: code = NotFound desc = could not find container \"7e38a694cb59c8a8c538783300ba6051eef6525a53c82c8b59aa44c4a104d40b\": container with ID starting with 7e38a694cb59c8a8c538783300ba6051eef6525a53c82c8b59aa44c4a104d40b not found: ID does not exist" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.254913 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.267239 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 04:08:59 crc kubenswrapper[4667]: E0131 04:08:59.267885 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4128ea2d-f529-4224-a008-560c8920dc8f" containerName="glance-httpd" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.267900 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="4128ea2d-f529-4224-a008-560c8920dc8f" containerName="glance-httpd" Jan 31 04:08:59 crc kubenswrapper[4667]: E0131 04:08:59.267911 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4128ea2d-f529-4224-a008-560c8920dc8f" containerName="glance-log" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.267918 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="4128ea2d-f529-4224-a008-560c8920dc8f" containerName="glance-log" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.268177 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="4128ea2d-f529-4224-a008-560c8920dc8f" containerName="glance-log" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.268198 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="4128ea2d-f529-4224-a008-560c8920dc8f" containerName="glance-httpd" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.274507 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.274717 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.282233 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.282898 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.341238 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4128ea2d-f529-4224-a008-560c8920dc8f" path="/var/lib/kubelet/pods/4128ea2d-f529-4224-a008-560c8920dc8f/volumes" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.414382 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9aae903-8070-44c6-8826-ec0ff7d90139-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.414489 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9aae903-8070-44c6-8826-ec0ff7d90139-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.414554 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9aae903-8070-44c6-8826-ec0ff7d90139-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.414579 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9aae903-8070-44c6-8826-ec0ff7d90139-logs\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.414627 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9aae903-8070-44c6-8826-ec0ff7d90139-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.414682 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9aae903-8070-44c6-8826-ec0ff7d90139-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.414706 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.414734 4667 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq87j\" (UniqueName: \"kubernetes.io/projected/b9aae903-8070-44c6-8826-ec0ff7d90139-kube-api-access-fq87j\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.516907 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9aae903-8070-44c6-8826-ec0ff7d90139-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.517899 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9aae903-8070-44c6-8826-ec0ff7d90139-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.517966 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.518315 4667 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.518430 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9aae903-8070-44c6-8826-ec0ff7d90139-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.518004 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq87j\" (UniqueName: \"kubernetes.io/projected/b9aae903-8070-44c6-8826-ec0ff7d90139-kube-api-access-fq87j\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.520675 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9aae903-8070-44c6-8826-ec0ff7d90139-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.520750 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9aae903-8070-44c6-8826-ec0ff7d90139-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.520784 4667 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9aae903-8070-44c6-8826-ec0ff7d90139-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.520809 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9aae903-8070-44c6-8826-ec0ff7d90139-logs\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.528596 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9aae903-8070-44c6-8826-ec0ff7d90139-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.529827 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9aae903-8070-44c6-8826-ec0ff7d90139-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.530404 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9aae903-8070-44c6-8826-ec0ff7d90139-logs\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.530775 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9aae903-8070-44c6-8826-ec0ff7d90139-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.539590 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9aae903-8070-44c6-8826-ec0ff7d90139-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.543225 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq87j\" (UniqueName: \"kubernetes.io/projected/b9aae903-8070-44c6-8826-ec0ff7d90139-kube-api-access-fq87j\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.552065 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"b9aae903-8070-44c6-8826-ec0ff7d90139\") " pod="openstack/glance-default-internal-api-0" Jan 31 04:08:59 crc kubenswrapper[4667]: I0131 04:08:59.623085 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 31 04:09:00 crc kubenswrapper[4667]: I0131 04:09:00.290440 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 31 04:09:01 crc kubenswrapper[4667]: I0131 04:09:01.175353 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b9aae903-8070-44c6-8826-ec0ff7d90139","Type":"ContainerStarted","Data":"ab4c7f017dfd663e7df74bfe0055b7eb22a01528c64ea5eeff76c8196d599c66"} Jan 31 04:09:01 crc kubenswrapper[4667]: I0131 04:09:01.175745 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b9aae903-8070-44c6-8826-ec0ff7d90139","Type":"ContainerStarted","Data":"77f4d5280fe0fa04d2ecc002186cf12f3345422698efbabe1a6e856b6ad93546"} Jan 31 04:09:01 crc kubenswrapper[4667]: I0131 04:09:01.755499 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 31 04:09:01 crc kubenswrapper[4667]: I0131 04:09:01.845017 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86c748c4d6-2grmh" podUID="c6974567-3bea-447a-bb8b-ced22b6d34ce" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 31 04:09:02 crc kubenswrapper[4667]: I0131 04:09:02.185999 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b9aae903-8070-44c6-8826-ec0ff7d90139","Type":"ContainerStarted","Data":"a02f373c04fc2091e1743aef1a8c07f7ce016c55f07e4fd59c3bfc873162229d"} Jan 31 04:09:02 crc kubenswrapper[4667]: I0131 04:09:02.204945 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.204924752 podStartE2EDuration="3.204924752s" podCreationTimestamp="2026-01-31 04:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:09:02.202894809 +0000 UTC m=+1265.719230108" watchObservedRunningTime="2026-01-31 04:09:02.204924752 +0000 UTC m=+1265.721260051" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.157757 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.204045 4667 generic.go:334] "Generic (PLEG): container finished" podID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerID="7e4959f43922ea44045e67b50343a5ffd986dcfde1ed6571be2a9b98b14e3896" exitCode=0 Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.205195 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.205284 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e49e208-fc35-469b-a53d-3c8b392a6bc7","Type":"ContainerDied","Data":"7e4959f43922ea44045e67b50343a5ffd986dcfde1ed6571be2a9b98b14e3896"} Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.205401 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e49e208-fc35-469b-a53d-3c8b392a6bc7","Type":"ContainerDied","Data":"6c3e380cad194af4de2378c9721605703d0c9c34bdbebc6536e87be7d7e44e9f"} Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.205490 4667 scope.go:117] "RemoveContainer" containerID="a28a68de9577b7425cd1da116336630113e3c1d52663f4493b7570ed94162c06" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.205703 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px26d\" (UniqueName: \"kubernetes.io/projected/8e49e208-fc35-469b-a53d-3c8b392a6bc7-kube-api-access-px26d\") pod \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.205759 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-scripts\") pod \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.205829 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e49e208-fc35-469b-a53d-3c8b392a6bc7-log-httpd\") pod \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.205904 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-config-data\") pod \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.206004 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-combined-ca-bundle\") pod \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.206096 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-sg-core-conf-yaml\") pod \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.206157 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e49e208-fc35-469b-a53d-3c8b392a6bc7-run-httpd\") pod \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\" (UID: \"8e49e208-fc35-469b-a53d-3c8b392a6bc7\") " Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.207009 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e49e208-fc35-469b-a53d-3c8b392a6bc7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"8e49e208-fc35-469b-a53d-3c8b392a6bc7" (UID: "8e49e208-fc35-469b-a53d-3c8b392a6bc7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.234059 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e49e208-fc35-469b-a53d-3c8b392a6bc7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8e49e208-fc35-469b-a53d-3c8b392a6bc7" (UID: "8e49e208-fc35-469b-a53d-3c8b392a6bc7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.235257 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-scripts" (OuterVolumeSpecName: "scripts") pod "8e49e208-fc35-469b-a53d-3c8b392a6bc7" (UID: "8e49e208-fc35-469b-a53d-3c8b392a6bc7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.237711 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e49e208-fc35-469b-a53d-3c8b392a6bc7-kube-api-access-px26d" (OuterVolumeSpecName: "kube-api-access-px26d") pod "8e49e208-fc35-469b-a53d-3c8b392a6bc7" (UID: "8e49e208-fc35-469b-a53d-3c8b392a6bc7"). InnerVolumeSpecName "kube-api-access-px26d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.238347 4667 scope.go:117] "RemoveContainer" containerID="d76a666a12fd5ab4c663aaa18d763b14822e2b424323ae2a8c468ea12d88ca85" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.259988 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8e49e208-fc35-469b-a53d-3c8b392a6bc7" (UID: "8e49e208-fc35-469b-a53d-3c8b392a6bc7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.309647 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px26d\" (UniqueName: \"kubernetes.io/projected/8e49e208-fc35-469b-a53d-3c8b392a6bc7-kube-api-access-px26d\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.309939 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.310021 4667 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e49e208-fc35-469b-a53d-3c8b392a6bc7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.310083 4667 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.310145 4667 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e49e208-fc35-469b-a53d-3c8b392a6bc7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.355673 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-config-data" (OuterVolumeSpecName: "config-data") pod "8e49e208-fc35-469b-a53d-3c8b392a6bc7" (UID: "8e49e208-fc35-469b-a53d-3c8b392a6bc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.357185 4667 scope.go:117] "RemoveContainer" containerID="b41188e39946c0f57b973291db1df1bb50eb58ae5fe1231e2163c4925dce5198" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.362966 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e49e208-fc35-469b-a53d-3c8b392a6bc7" (UID: "8e49e208-fc35-469b-a53d-3c8b392a6bc7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.388433 4667 scope.go:117] "RemoveContainer" containerID="7e4959f43922ea44045e67b50343a5ffd986dcfde1ed6571be2a9b98b14e3896" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.418624 4667 scope.go:117] "RemoveContainer" containerID="a28a68de9577b7425cd1da116336630113e3c1d52663f4493b7570ed94162c06" Jan 31 04:09:03 crc kubenswrapper[4667]: E0131 04:09:03.424251 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a28a68de9577b7425cd1da116336630113e3c1d52663f4493b7570ed94162c06\": container with ID starting with a28a68de9577b7425cd1da116336630113e3c1d52663f4493b7570ed94162c06 not found: ID does not exist" containerID="a28a68de9577b7425cd1da116336630113e3c1d52663f4493b7570ed94162c06" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.424364 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a28a68de9577b7425cd1da116336630113e3c1d52663f4493b7570ed94162c06"} err="failed to get container status \"a28a68de9577b7425cd1da116336630113e3c1d52663f4493b7570ed94162c06\": rpc error: code = NotFound desc = could not find container \"a28a68de9577b7425cd1da116336630113e3c1d52663f4493b7570ed94162c06\": container with ID starting with a28a68de9577b7425cd1da116336630113e3c1d52663f4493b7570ed94162c06 not found: ID does not exist" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.424452 4667 scope.go:117] "RemoveContainer" containerID="d76a666a12fd5ab4c663aaa18d763b14822e2b424323ae2a8c468ea12d88ca85" Jan 31 04:09:03 crc kubenswrapper[4667]: E0131 04:09:03.425244 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d76a666a12fd5ab4c663aaa18d763b14822e2b424323ae2a8c468ea12d88ca85\": container with ID starting with d76a666a12fd5ab4c663aaa18d763b14822e2b424323ae2a8c468ea12d88ca85 not found: ID does not exist" containerID="d76a666a12fd5ab4c663aaa18d763b14822e2b424323ae2a8c468ea12d88ca85" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.425352 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d76a666a12fd5ab4c663aaa18d763b14822e2b424323ae2a8c468ea12d88ca85"} err="failed to get container status \"d76a666a12fd5ab4c663aaa18d763b14822e2b424323ae2a8c468ea12d88ca85\": rpc error: code = NotFound desc = could not find container \"d76a666a12fd5ab4c663aaa18d763b14822e2b424323ae2a8c468ea12d88ca85\": container with ID starting with d76a666a12fd5ab4c663aaa18d763b14822e2b424323ae2a8c468ea12d88ca85 not found: ID does not exist" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.425447 4667 scope.go:117] "RemoveContainer" containerID="b41188e39946c0f57b973291db1df1bb50eb58ae5fe1231e2163c4925dce5198" Jan 31 04:09:03 crc kubenswrapper[4667]: E0131 04:09:03.426079 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b41188e39946c0f57b973291db1df1bb50eb58ae5fe1231e2163c4925dce5198\": container with ID starting with b41188e39946c0f57b973291db1df1bb50eb58ae5fe1231e2163c4925dce5198 not found: ID does not exist" containerID="b41188e39946c0f57b973291db1df1bb50eb58ae5fe1231e2163c4925dce5198" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.426183 4667 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b41188e39946c0f57b973291db1df1bb50eb58ae5fe1231e2163c4925dce5198"} err="failed to get container status \"b41188e39946c0f57b973291db1df1bb50eb58ae5fe1231e2163c4925dce5198\": rpc error: code = NotFound desc = could not find container \"b41188e39946c0f57b973291db1df1bb50eb58ae5fe1231e2163c4925dce5198\": container with ID starting with b41188e39946c0f57b973291db1df1bb50eb58ae5fe1231e2163c4925dce5198 not found: ID does not exist" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.426277 4667 scope.go:117] "RemoveContainer" containerID="7e4959f43922ea44045e67b50343a5ffd986dcfde1ed6571be2a9b98b14e3896" Jan 31 04:09:03 crc kubenswrapper[4667]: E0131 04:09:03.426560 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e4959f43922ea44045e67b50343a5ffd986dcfde1ed6571be2a9b98b14e3896\": container with ID starting with 7e4959f43922ea44045e67b50343a5ffd986dcfde1ed6571be2a9b98b14e3896 not found: ID does not exist" containerID="7e4959f43922ea44045e67b50343a5ffd986dcfde1ed6571be2a9b98b14e3896" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.426657 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e4959f43922ea44045e67b50343a5ffd986dcfde1ed6571be2a9b98b14e3896"} err="failed to get container status \"7e4959f43922ea44045e67b50343a5ffd986dcfde1ed6571be2a9b98b14e3896\": rpc error: code = NotFound desc = could not find container \"7e4959f43922ea44045e67b50343a5ffd986dcfde1ed6571be2a9b98b14e3896\": container with ID starting with 7e4959f43922ea44045e67b50343a5ffd986dcfde1ed6571be2a9b98b14e3896 not found: ID does not exist" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.438859 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.438893 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e49e208-fc35-469b-a53d-3c8b392a6bc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.548562 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.557774 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.590761 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:09:03 crc kubenswrapper[4667]: E0131 04:09:03.591457 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerName="sg-core" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.591523 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerName="sg-core" Jan 31 04:09:03 crc kubenswrapper[4667]: E0131 04:09:03.591617 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerName="proxy-httpd" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.591671 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerName="proxy-httpd" Jan 31 04:09:03 crc kubenswrapper[4667]: E0131 04:09:03.591728 4667 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerName="ceilometer-notification-agent" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.591776 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerName="ceilometer-notification-agent" Jan 31 04:09:03 crc kubenswrapper[4667]: E0131 04:09:03.591857 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerName="ceilometer-central-agent" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.591922 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerName="ceilometer-central-agent" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.592140 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerName="proxy-httpd" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.592216 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerName="ceilometer-central-agent" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.592274 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerName="ceilometer-notification-agent" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.592341 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" containerName="sg-core" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.594035 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.600961 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.601577 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.628478 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.645975 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-scripts\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.646036 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-log-httpd\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.646081 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-config-data\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.646103 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-run-httpd\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.646121 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.646173 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghg84\" (UniqueName: \"kubernetes.io/projected/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-kube-api-access-ghg84\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.646217 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.766288 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghg84\" (UniqueName: \"kubernetes.io/projected/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-kube-api-access-ghg84\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.766953 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.767015 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-scripts\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.767125 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-log-httpd\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.767240 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-config-data\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.767284 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-run-httpd\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.767303 4667 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.774433 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-log-httpd\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.777983 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-scripts\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.779750 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-run-httpd\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.783567 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.786211 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-config-data\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.805590 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.815064 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghg84\" (UniqueName: \"kubernetes.io/projected/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-kube-api-access-ghg84\") pod \"ceilometer-0\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " pod="openstack/ceilometer-0" Jan 31 04:09:03 crc kubenswrapper[4667]: I0131 04:09:03.944831 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:09:04 crc kubenswrapper[4667]: I0131 04:09:04.517563 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:09:04 crc kubenswrapper[4667]: I0131 04:09:04.647427 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 04:09:04 crc kubenswrapper[4667]: I0131 04:09:04.647486 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 31 04:09:04 crc kubenswrapper[4667]: I0131 04:09:04.683484 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 04:09:04 crc kubenswrapper[4667]: I0131 04:09:04.692767 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 31 04:09:05 crc kubenswrapper[4667]: I0131 04:09:05.232108 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae80f16a-2fb7-4994-b184-27c1e5fe52b6","Type":"ContainerStarted","Data":"10efd47238628ff8b941fd0cbbf6bdba9c67d4b0c324fd760ea664df8fad00cf"} Jan 31 04:09:05 crc kubenswrapper[4667]: I0131 04:09:05.232442 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 04:09:05 crc kubenswrapper[4667]: I0131 04:09:05.232461 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 31 04:09:05 crc kubenswrapper[4667]: I0131 04:09:05.295009 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e49e208-fc35-469b-a53d-3c8b392a6bc7" path="/var/lib/kubelet/pods/8e49e208-fc35-469b-a53d-3c8b392a6bc7/volumes" Jan 31 04:09:06 crc kubenswrapper[4667]: I0131 04:09:06.249617 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae80f16a-2fb7-4994-b184-27c1e5fe52b6","Type":"ContainerStarted","Data":"50d6382f5e324e87e926b7fa6c6be8ee503105ace1dd34b5dcb206c154cd6686"} Jan 31 04:09:06 crc kubenswrapper[4667]: I0131 04:09:06.250334 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae80f16a-2fb7-4994-b184-27c1e5fe52b6","Type":"ContainerStarted","Data":"9b61a14797d90327329ffd0026584c2dbcf88a931f511747e78aed327e3186a8"} Jan 31 04:09:07 crc kubenswrapper[4667]: I0131 04:09:07.262031 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae80f16a-2fb7-4994-b184-27c1e5fe52b6","Type":"ContainerStarted","Data":"7bb0466bd935eeeb0b3bca71b7fc0eb44ebb3b09804b8e24692d376c38feba87"} Jan 31 04:09:07 crc kubenswrapper[4667]: I0131 04:09:07.734557 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 04:09:07 crc kubenswrapper[4667]: I0131 04:09:07.734723 4667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 04:09:07 crc kubenswrapper[4667]: I0131 04:09:07.738576 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 31 04:09:09 crc kubenswrapper[4667]: I0131 04:09:09.309371 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae80f16a-2fb7-4994-b184-27c1e5fe52b6","Type":"ContainerStarted","Data":"28b4941c66dccf2a7d7b22f3be35dbaa54d16e2f2153dfa5dee5f1691a423311"} Jan 31 
04:09:09 crc kubenswrapper[4667]: I0131 04:09:09.310064 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 04:09:09 crc kubenswrapper[4667]: I0131 04:09:09.353245 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.927196766 podStartE2EDuration="6.353219236s" podCreationTimestamp="2026-01-31 04:09:03 +0000 UTC" firstStartedPulling="2026-01-31 04:09:04.524761323 +0000 UTC m=+1268.041096622" lastFinishedPulling="2026-01-31 04:09:08.950783803 +0000 UTC m=+1272.467119092" observedRunningTime="2026-01-31 04:09:09.334447258 +0000 UTC m=+1272.850782557" watchObservedRunningTime="2026-01-31 04:09:09.353219236 +0000 UTC m=+1272.869554535" Jan 31 04:09:09 crc kubenswrapper[4667]: I0131 04:09:09.624079 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 04:09:09 crc kubenswrapper[4667]: I0131 04:09:09.624169 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 31 04:09:09 crc kubenswrapper[4667]: I0131 04:09:09.677941 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 04:09:09 crc kubenswrapper[4667]: I0131 04:09:09.701518 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 31 04:09:10 crc kubenswrapper[4667]: I0131 04:09:10.319771 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 04:09:10 crc kubenswrapper[4667]: I0131 04:09:10.320372 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 31 04:09:11 crc kubenswrapper[4667]: I0131 04:09:11.755953 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 31 04:09:11 crc kubenswrapper[4667]: I0131 04:09:11.756052 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:09:11 crc kubenswrapper[4667]: I0131 04:09:11.756997 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"d51854ff784d64b2b3584b6cdda45491a29c7d1089ddf69708469cfc6e98fccc"} pod="openstack/horizon-78789d8f44-5trmc" containerMessage="Container horizon failed startup probe, will be restarted" Jan 31 04:09:11 crc kubenswrapper[4667]: I0131 04:09:11.757042 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" containerID="cri-o://d51854ff784d64b2b3584b6cdda45491a29c7d1089ddf69708469cfc6e98fccc" gracePeriod=30 Jan 31 04:09:11 crc kubenswrapper[4667]: I0131 04:09:11.844308 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86c748c4d6-2grmh" podUID="c6974567-3bea-447a-bb8b-ced22b6d34ce" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 31 
Jan 31 04:09:11 crc kubenswrapper[4667]: I0131 04:09:11.844916 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:09:11 crc kubenswrapper[4667]: I0131 04:09:11.846242 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"8585ef04e351d14473c07be1275ec2c6840212275304d32bbdccbfc70cb910c8"} pod="openstack/horizon-86c748c4d6-2grmh" containerMessage="Container horizon failed startup probe, will be restarted" Jan 31 04:09:11 crc kubenswrapper[4667]: I0131 04:09:11.846287 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-86c748c4d6-2grmh" podUID="c6974567-3bea-447a-bb8b-ced22b6d34ce" containerName="horizon" containerID="cri-o://8585ef04e351d14473c07be1275ec2c6840212275304d32bbdccbfc70cb910c8" gracePeriod=30 Jan 31 04:09:13 crc kubenswrapper[4667]: I0131 04:09:13.381956 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 04:09:13 crc kubenswrapper[4667]: I0131 04:09:13.382106 4667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 04:09:13 crc kubenswrapper[4667]: I0131 04:09:13.507553 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 31 04:09:13 crc kubenswrapper[4667]: I0131 04:09:13.986576 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-x9gc7"] Jan 31 04:09:13 crc kubenswrapper[4667]: I0131 04:09:13.988235 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-x9gc7" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.010843 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-x9gc7"] Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.131202 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-8q9zm"] Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.134485 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8q9zm" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.135690 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8vr9\" (UniqueName: \"kubernetes.io/projected/73be4b20-cf7a-430b-995f-07f3475b064c-kube-api-access-h8vr9\") pod \"nova-api-db-create-x9gc7\" (UID: \"73be4b20-cf7a-430b-995f-07f3475b064c\") " pod="openstack/nova-api-db-create-x9gc7" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.135771 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73be4b20-cf7a-430b-995f-07f3475b064c-operator-scripts\") pod \"nova-api-db-create-x9gc7\" (UID: \"73be4b20-cf7a-430b-995f-07f3475b064c\") " pod="openstack/nova-api-db-create-x9gc7" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.214958 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8q9zm"] Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.246006 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce7e8e8-08b1-41ef-b1d4-efaba433bec0-operator-scripts\") pod \"nova-cell0-db-create-8q9zm\" (UID: \"cce7e8e8-08b1-41ef-b1d4-efaba433bec0\") " pod="openstack/nova-cell0-db-create-8q9zm" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.246078 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8vr9\" (UniqueName: \"kubernetes.io/projected/73be4b20-cf7a-430b-995f-07f3475b064c-kube-api-access-h8vr9\") pod \"nova-api-db-create-x9gc7\" (UID: \"73be4b20-cf7a-430b-995f-07f3475b064c\") " pod="openstack/nova-api-db-create-x9gc7" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.246103 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nl88\" (UniqueName: \"kubernetes.io/projected/cce7e8e8-08b1-41ef-b1d4-efaba433bec0-kube-api-access-4nl88\") pod \"nova-cell0-db-create-8q9zm\" (UID: \"cce7e8e8-08b1-41ef-b1d4-efaba433bec0\") " pod="openstack/nova-cell0-db-create-8q9zm" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.246147 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73be4b20-cf7a-430b-995f-07f3475b064c-operator-scripts\") pod \"nova-api-db-create-x9gc7\" (UID: \"73be4b20-cf7a-430b-995f-07f3475b064c\") " pod="openstack/nova-api-db-create-x9gc7" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.247557 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73be4b20-cf7a-430b-995f-07f3475b064c-operator-scripts\") pod \"nova-api-db-create-x9gc7\" (UID: \"73be4b20-cf7a-430b-995f-07f3475b064c\") " pod="openstack/nova-api-db-create-x9gc7" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.253538 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-5055-account-create-update-2rcb7"] Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.255070 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-5055-account-create-update-2rcb7" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.262096 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.314309 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5055-account-create-update-2rcb7"] Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.365555 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8vr9\" (UniqueName: \"kubernetes.io/projected/73be4b20-cf7a-430b-995f-07f3475b064c-kube-api-access-h8vr9\") pod \"nova-api-db-create-x9gc7\" (UID: \"73be4b20-cf7a-430b-995f-07f3475b064c\") " pod="openstack/nova-api-db-create-x9gc7" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.373059 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-bnv28"] Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.373805 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8x86\" (UniqueName: \"kubernetes.io/projected/492ab7ad-86ee-4c40-a924-bd5cd948a4dd-kube-api-access-k8x86\") pod \"nova-api-5055-account-create-update-2rcb7\" (UID: \"492ab7ad-86ee-4c40-a924-bd5cd948a4dd\") " pod="openstack/nova-api-5055-account-create-update-2rcb7" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.374071 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce7e8e8-08b1-41ef-b1d4-efaba433bec0-operator-scripts\") pod \"nova-cell0-db-create-8q9zm\" (UID: \"cce7e8e8-08b1-41ef-b1d4-efaba433bec0\") " pod="openstack/nova-cell0-db-create-8q9zm" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.374112 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/492ab7ad-86ee-4c40-a924-bd5cd948a4dd-operator-scripts\") pod \"nova-api-5055-account-create-update-2rcb7\" (UID: \"492ab7ad-86ee-4c40-a924-bd5cd948a4dd\") " pod="openstack/nova-api-5055-account-create-update-2rcb7" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.374144 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nl88\" (UniqueName: \"kubernetes.io/projected/cce7e8e8-08b1-41ef-b1d4-efaba433bec0-kube-api-access-4nl88\") pod \"nova-cell0-db-create-8q9zm\" (UID: \"cce7e8e8-08b1-41ef-b1d4-efaba433bec0\") " pod="openstack/nova-cell0-db-create-8q9zm" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.375702 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce7e8e8-08b1-41ef-b1d4-efaba433bec0-operator-scripts\") pod \"nova-cell0-db-create-8q9zm\" (UID: \"cce7e8e8-08b1-41ef-b1d4-efaba433bec0\") " pod="openstack/nova-cell0-db-create-8q9zm" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.377082 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-bnv28" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.393830 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bnv28"] Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.439784 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-916e-account-create-update-ps447"] Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.441372 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-916e-account-create-update-ps447" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.450137 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.461082 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nl88\" (UniqueName: \"kubernetes.io/projected/cce7e8e8-08b1-41ef-b1d4-efaba433bec0-kube-api-access-4nl88\") pod \"nova-cell0-db-create-8q9zm\" (UID: \"cce7e8e8-08b1-41ef-b1d4-efaba433bec0\") " pod="openstack/nova-cell0-db-create-8q9zm" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.468752 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-8q9zm" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.480207 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/492ab7ad-86ee-4c40-a924-bd5cd948a4dd-operator-scripts\") pod \"nova-api-5055-account-create-update-2rcb7\" (UID: \"492ab7ad-86ee-4c40-a924-bd5cd948a4dd\") " pod="openstack/nova-api-5055-account-create-update-2rcb7" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.480326 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdpcq\" (UniqueName: \"kubernetes.io/projected/a405aa98-d9c4-4ee1-90bb-da0cc8e09301-kube-api-access-jdpcq\") pod \"nova-cell0-916e-account-create-update-ps447\" (UID: \"a405aa98-d9c4-4ee1-90bb-da0cc8e09301\") " pod="openstack/nova-cell0-916e-account-create-update-ps447" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.480393 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b1f7066-97d5-4fbf-915e-06d9ed522000-operator-scripts\") pod \"nova-cell1-db-create-bnv28\" (UID: \"6b1f7066-97d5-4fbf-915e-06d9ed522000\") " pod="openstack/nova-cell1-db-create-bnv28" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.480477 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8x86\" (UniqueName: \"kubernetes.io/projected/492ab7ad-86ee-4c40-a924-bd5cd948a4dd-kube-api-access-k8x86\") pod \"nova-api-5055-account-create-update-2rcb7\" (UID: \"492ab7ad-86ee-4c40-a924-bd5cd948a4dd\") " pod="openstack/nova-api-5055-account-create-update-2rcb7" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.494535 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a405aa98-d9c4-4ee1-90bb-da0cc8e09301-operator-scripts\") pod \"nova-cell0-916e-account-create-update-ps447\" (UID: \"a405aa98-d9c4-4ee1-90bb-da0cc8e09301\") " pod="openstack/nova-cell0-916e-account-create-update-ps447" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 
04:09:14.494683 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnqb8\" (UniqueName: \"kubernetes.io/projected/6b1f7066-97d5-4fbf-915e-06d9ed522000-kube-api-access-fnqb8\") pod \"nova-cell1-db-create-bnv28\" (UID: \"6b1f7066-97d5-4fbf-915e-06d9ed522000\") " pod="openstack/nova-cell1-db-create-bnv28" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.495617 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/492ab7ad-86ee-4c40-a924-bd5cd948a4dd-operator-scripts\") pod \"nova-api-5055-account-create-update-2rcb7\" (UID: \"492ab7ad-86ee-4c40-a924-bd5cd948a4dd\") " pod="openstack/nova-api-5055-account-create-update-2rcb7" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.513284 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-916e-account-create-update-ps447"] Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.555670 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8x86\" (UniqueName: \"kubernetes.io/projected/492ab7ad-86ee-4c40-a924-bd5cd948a4dd-kube-api-access-k8x86\") pod \"nova-api-5055-account-create-update-2rcb7\" (UID: \"492ab7ad-86ee-4c40-a924-bd5cd948a4dd\") " pod="openstack/nova-api-5055-account-create-update-2rcb7" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.598253 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b1f7066-97d5-4fbf-915e-06d9ed522000-operator-scripts\") pod \"nova-cell1-db-create-bnv28\" (UID: \"6b1f7066-97d5-4fbf-915e-06d9ed522000\") " pod="openstack/nova-cell1-db-create-bnv28" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.598379 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a405aa98-d9c4-4ee1-90bb-da0cc8e09301-operator-scripts\") pod \"nova-cell0-916e-account-create-update-ps447\" (UID: \"a405aa98-d9c4-4ee1-90bb-da0cc8e09301\") " pod="openstack/nova-cell0-916e-account-create-update-ps447" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.598414 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnqb8\" (UniqueName: \"kubernetes.io/projected/6b1f7066-97d5-4fbf-915e-06d9ed522000-kube-api-access-fnqb8\") pod \"nova-cell1-db-create-bnv28\" (UID: \"6b1f7066-97d5-4fbf-915e-06d9ed522000\") " pod="openstack/nova-cell1-db-create-bnv28" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.598475 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdpcq\" (UniqueName: \"kubernetes.io/projected/a405aa98-d9c4-4ee1-90bb-da0cc8e09301-kube-api-access-jdpcq\") pod \"nova-cell0-916e-account-create-update-ps447\" (UID: \"a405aa98-d9c4-4ee1-90bb-da0cc8e09301\") " pod="openstack/nova-cell0-916e-account-create-update-ps447" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.599708 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b1f7066-97d5-4fbf-915e-06d9ed522000-operator-scripts\") pod \"nova-cell1-db-create-bnv28\" (UID: \"6b1f7066-97d5-4fbf-915e-06d9ed522000\") " pod="openstack/nova-cell1-db-create-bnv28" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.600256 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a405aa98-d9c4-4ee1-90bb-da0cc8e09301-operator-scripts\") pod \"nova-cell0-916e-account-create-update-ps447\" (UID: \"a405aa98-d9c4-4ee1-90bb-da0cc8e09301\") " pod="openstack/nova-cell0-916e-account-create-update-ps447" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.600561 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5055-account-create-update-2rcb7" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.609107 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-x9gc7" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.646554 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnqb8\" (UniqueName: \"kubernetes.io/projected/6b1f7066-97d5-4fbf-915e-06d9ed522000-kube-api-access-fnqb8\") pod \"nova-cell1-db-create-bnv28\" (UID: \"6b1f7066-97d5-4fbf-915e-06d9ed522000\") " pod="openstack/nova-cell1-db-create-bnv28" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.655539 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdpcq\" (UniqueName: \"kubernetes.io/projected/a405aa98-d9c4-4ee1-90bb-da0cc8e09301-kube-api-access-jdpcq\") pod \"nova-cell0-916e-account-create-update-ps447\" (UID: \"a405aa98-d9c4-4ee1-90bb-da0cc8e09301\") " pod="openstack/nova-cell0-916e-account-create-update-ps447" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.686543 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-32cf-account-create-update-hsfjj"] Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.688135 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-32cf-account-create-update-hsfjj" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.695564 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.714384 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-32cf-account-create-update-hsfjj"] Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.717216 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bnv28" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.813441 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2xv4\" (UniqueName: \"kubernetes.io/projected/be0254c4-04b0-44bb-96dd-69a9538a9f9e-kube-api-access-g2xv4\") pod \"nova-cell1-32cf-account-create-update-hsfjj\" (UID: \"be0254c4-04b0-44bb-96dd-69a9538a9f9e\") " pod="openstack/nova-cell1-32cf-account-create-update-hsfjj" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.813573 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be0254c4-04b0-44bb-96dd-69a9538a9f9e-operator-scripts\") pod \"nova-cell1-32cf-account-create-update-hsfjj\" (UID: \"be0254c4-04b0-44bb-96dd-69a9538a9f9e\") " pod="openstack/nova-cell1-32cf-account-create-update-hsfjj" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.813792 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-916e-account-create-update-ps447" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.916405 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be0254c4-04b0-44bb-96dd-69a9538a9f9e-operator-scripts\") pod \"nova-cell1-32cf-account-create-update-hsfjj\" (UID: \"be0254c4-04b0-44bb-96dd-69a9538a9f9e\") " pod="openstack/nova-cell1-32cf-account-create-update-hsfjj" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.916967 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2xv4\" (UniqueName: \"kubernetes.io/projected/be0254c4-04b0-44bb-96dd-69a9538a9f9e-kube-api-access-g2xv4\") pod \"nova-cell1-32cf-account-create-update-hsfjj\" (UID: \"be0254c4-04b0-44bb-96dd-69a9538a9f9e\") " pod="openstack/nova-cell1-32cf-account-create-update-hsfjj" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.918325 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be0254c4-04b0-44bb-96dd-69a9538a9f9e-operator-scripts\") pod \"nova-cell1-32cf-account-create-update-hsfjj\" (UID: \"be0254c4-04b0-44bb-96dd-69a9538a9f9e\") " pod="openstack/nova-cell1-32cf-account-create-update-hsfjj" Jan 31 04:09:14 crc kubenswrapper[4667]: I0131 04:09:14.950413 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2xv4\" (UniqueName: \"kubernetes.io/projected/be0254c4-04b0-44bb-96dd-69a9538a9f9e-kube-api-access-g2xv4\") pod \"nova-cell1-32cf-account-create-update-hsfjj\" (UID: \"be0254c4-04b0-44bb-96dd-69a9538a9f9e\") " pod="openstack/nova-cell1-32cf-account-create-update-hsfjj" Jan 31 04:09:15 crc kubenswrapper[4667]: I0131 04:09:15.022877 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-32cf-account-create-update-hsfjj" Jan 31 04:09:15 crc kubenswrapper[4667]: I0131 04:09:15.346360 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-8q9zm"] Jan 31 04:09:15 crc kubenswrapper[4667]: W0131 04:09:15.433552 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcce7e8e8_08b1_41ef_b1d4_efaba433bec0.slice/crio-7c452be56a2a7bb4d5dd8790de0430657f5cdc8c08de6c501765685026173dcb WatchSource:0}: Error finding container 7c452be56a2a7bb4d5dd8790de0430657f5cdc8c08de6c501765685026173dcb: Status 404 returned error can't find the container with id 7c452be56a2a7bb4d5dd8790de0430657f5cdc8c08de6c501765685026173dcb Jan 31 04:09:15 crc kubenswrapper[4667]: I0131 04:09:15.535609 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5055-account-create-update-2rcb7"] Jan 31 04:09:15 crc kubenswrapper[4667]: I0131 04:09:15.706183 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:09:15 crc kubenswrapper[4667]: I0131 04:09:15.706242 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:09:15 crc kubenswrapper[4667]: I0131 04:09:15.706293 4667 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 04:09:15 crc kubenswrapper[4667]: I0131 04:09:15.707062 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4def2f985a42835fdac5d21069cf64f18010ecd6521e48ae16ef15b594559e50"} pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:09:15 crc kubenswrapper[4667]: I0131 04:09:15.707120 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" containerID="cri-o://4def2f985a42835fdac5d21069cf64f18010ecd6521e48ae16ef15b594559e50" gracePeriod=600 Jan 31 04:09:15 crc kubenswrapper[4667]: I0131 04:09:15.904317 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-x9gc7"] Jan 31 04:09:15 crc kubenswrapper[4667]: I0131 04:09:15.937113 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bnv28"] Jan 31 04:09:16 crc kubenswrapper[4667]: I0131 04:09:16.116379 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-32cf-account-create-update-hsfjj"] Jan 31 04:09:16 crc kubenswrapper[4667]: I0131 04:09:16.162237 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-916e-account-create-update-ps447"]
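
The two "Probe failed" records above show the same mechanism at different severities: a failed Startup probe (the horizon pods, earlier) and a failed Liveness probe (machine-config-daemon here) both lead to "Killing container with a grace period" followed by a restart, with the grace period taken from the pod's termination grace settings (30s for horizon, 600s here). When triaging a journal like this one, it can help to reduce each failure record to its probe type, pod, and container. A rough sketch, assuming only the field layout visible in these lines (the regex and function are ours, not part of the kubelet):

    import re

    # Matches the probeType/pod/containerName fields of a kubelet
    # "Probe failed" record as it appears in this journal.
    PROBE_RE = re.compile(
        r'"Probe failed"\s+probeType="(?P<type>[^"]+)"\s+pod="(?P<pod>[^"]+)"'
        r'.*?containerName="(?P<container>[^"]+)"'
    )

    def probe_failures(lines):
        """Yield (probe type, pod, container) for each failure record."""
        for line in lines:
            m = PROBE_RE.search(line)
            if m:
                yield m.group("type"), m.group("pod"), m.group("container")

Fed the machine-config-daemon line above, it yields ("Liveness", "openshift-machine-config-operator/machine-config-daemon-j9b7g", "machine-config-daemon").
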
Jan 31 04:09:16 crc kubenswrapper[4667]: I0131 04:09:16.521200 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-916e-account-create-update-ps447" event={"ID":"a405aa98-d9c4-4ee1-90bb-da0cc8e09301","Type":"ContainerStarted","Data":"fea137e37a4031ef95aa776d0037ab33440dad97e198c752f05e294578a27e1d"} Jan 31 04:09:16 crc kubenswrapper[4667]: I0131 04:09:16.531420 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8q9zm" event={"ID":"cce7e8e8-08b1-41ef-b1d4-efaba433bec0","Type":"ContainerStarted","Data":"df2769d8d82b91d8ab5e821aac77e93796beaec66eb7bfe9a8f0a555e949ec26"} Jan 31 04:09:16 crc kubenswrapper[4667]: I0131 04:09:16.531458 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8q9zm" event={"ID":"cce7e8e8-08b1-41ef-b1d4-efaba433bec0","Type":"ContainerStarted","Data":"7c452be56a2a7bb4d5dd8790de0430657f5cdc8c08de6c501765685026173dcb"} Jan 31 04:09:16 crc kubenswrapper[4667]: I0131 04:09:16.552732 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5055-account-create-update-2rcb7" event={"ID":"492ab7ad-86ee-4c40-a924-bd5cd948a4dd","Type":"ContainerStarted","Data":"5489fab3329bbf4878f6363efb6c6b7f3da2406c445cd91f0a1da36d4af710b1"} Jan 31 04:09:16 crc kubenswrapper[4667]: I0131 04:09:16.552799 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5055-account-create-update-2rcb7" event={"ID":"492ab7ad-86ee-4c40-a924-bd5cd948a4dd","Type":"ContainerStarted","Data":"5913d9ae6134c9a572703d8be7517e68f606dca49fcecd9e957e42779ffcf4e2"} Jan 31 04:09:16 crc kubenswrapper[4667]: I0131 04:09:16.558434 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-8q9zm" podStartSLOduration=2.55842186 podStartE2EDuration="2.55842186s" podCreationTimestamp="2026-01-31 04:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:09:16.556364965 +0000 UTC m=+1280.072700264" watchObservedRunningTime="2026-01-31 04:09:16.55842186 +0000 UTC m=+1280.074757159" Jan 31 04:09:16 crc kubenswrapper[4667]: I0131 04:09:16.566685 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-32cf-account-create-update-hsfjj" event={"ID":"be0254c4-04b0-44bb-96dd-69a9538a9f9e","Type":"ContainerStarted","Data":"cf804acdd10ed721d7f1615835750663b2ef970dcf2bc157e639b99ca5b9d74a"} Jan 31 04:09:16 crc kubenswrapper[4667]: I0131 04:09:16.589305 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x9gc7" event={"ID":"73be4b20-cf7a-430b-995f-07f3475b064c","Type":"ContainerStarted","Data":"03331c518d65e5bd3c7066eaaa3a6c4c87cb6d8b666026c64d9556ea17d9576a"} Jan 31 04:09:16 crc kubenswrapper[4667]: I0131 04:09:16.589380 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x9gc7" event={"ID":"73be4b20-cf7a-430b-995f-07f3475b064c","Type":"ContainerStarted","Data":"6bcd5598f01c56a3e01c88ddedf5155894681bac11ade9516d7873a37bef5d7a"} Jan 31 04:09:16 crc kubenswrapper[4667]: I0131 04:09:16.596825 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-5055-account-create-update-2rcb7" podStartSLOduration=2.596792858 podStartE2EDuration="2.596792858s" podCreationTimestamp="2026-01-31 04:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:09:16.582340605 +0000 UTC m=+1280.098675904" watchObservedRunningTime="2026-01-31 
04:09:16.596792858 +0000 UTC m=+1280.113128157" Jan 31 04:09:16 crc kubenswrapper[4667]: I0131 04:09:16.621597 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bnv28" event={"ID":"6b1f7066-97d5-4fbf-915e-06d9ed522000","Type":"ContainerStarted","Data":"90d1156965a1f3f60aa54b48a50767575cbe330c7cebf3ed553a11ea924b4ba5"} Jan 31 04:09:16 crc kubenswrapper[4667]: I0131 04:09:16.622353 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bnv28" event={"ID":"6b1f7066-97d5-4fbf-915e-06d9ed522000","Type":"ContainerStarted","Data":"e1cec544a3903eeb7247f847f60fabb0422afcc89f90ea87d0433f6f247e5fc4"} Jan 31 04:09:16 crc kubenswrapper[4667]: I0131 04:09:16.635777 4667 generic.go:334] "Generic (PLEG): container finished" podID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerID="4def2f985a42835fdac5d21069cf64f18010ecd6521e48ae16ef15b594559e50" exitCode=0 Jan 31 04:09:16 crc kubenswrapper[4667]: I0131 04:09:16.635825 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerDied","Data":"4def2f985a42835fdac5d21069cf64f18010ecd6521e48ae16ef15b594559e50"} Jan 31 04:09:16 crc kubenswrapper[4667]: I0131 04:09:16.635867 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerStarted","Data":"f2541fc2fda6b826061d737e4a0c482f1977e25566cf6f78f58956c4922322ef"} Jan 31 04:09:16 crc kubenswrapper[4667]: I0131 04:09:16.635887 4667 scope.go:117] "RemoveContainer" containerID="a2768bea3b08958c54e155e8f29b14218602ccc55cf630ccf4d7736c3b3b12ec" Jan 31 04:09:16 crc kubenswrapper[4667]: I0131 04:09:16.641519 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-x9gc7" podStartSLOduration=3.641496995 podStartE2EDuration="3.641496995s" podCreationTimestamp="2026-01-31 04:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:09:16.620312182 +0000 UTC m=+1280.136647481" watchObservedRunningTime="2026-01-31 04:09:16.641496995 +0000 UTC m=+1280.157832294" Jan 31 04:09:16 crc kubenswrapper[4667]: I0131 04:09:16.671500 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-bnv28" podStartSLOduration=2.671472961 podStartE2EDuration="2.671472961s" podCreationTimestamp="2026-01-31 04:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:09:16.648932132 +0000 UTC m=+1280.165267431" watchObservedRunningTime="2026-01-31 04:09:16.671472961 +0000 UTC m=+1280.187808260" Jan 31 04:09:17 crc kubenswrapper[4667]: I0131 04:09:17.647219 4667 generic.go:334] "Generic (PLEG): container finished" podID="a405aa98-d9c4-4ee1-90bb-da0cc8e09301" containerID="c5172dd4ff35c2f9ff9790dc52aa49679c1a8d78f8091e15f8d887b09ca20690" exitCode=0 Jan 31 04:09:17 crc kubenswrapper[4667]: I0131 04:09:17.647380 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-916e-account-create-update-ps447" event={"ID":"a405aa98-d9c4-4ee1-90bb-da0cc8e09301","Type":"ContainerDied","Data":"c5172dd4ff35c2f9ff9790dc52aa49679c1a8d78f8091e15f8d887b09ca20690"} Jan 31 04:09:17 crc kubenswrapper[4667]: I0131 
04:09:17.652359 4667 generic.go:334] "Generic (PLEG): container finished" podID="cce7e8e8-08b1-41ef-b1d4-efaba433bec0" containerID="df2769d8d82b91d8ab5e821aac77e93796beaec66eb7bfe9a8f0a555e949ec26" exitCode=0 Jan 31 04:09:17 crc kubenswrapper[4667]: I0131 04:09:17.652445 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8q9zm" event={"ID":"cce7e8e8-08b1-41ef-b1d4-efaba433bec0","Type":"ContainerDied","Data":"df2769d8d82b91d8ab5e821aac77e93796beaec66eb7bfe9a8f0a555e949ec26"} Jan 31 04:09:17 crc kubenswrapper[4667]: I0131 04:09:17.656249 4667 generic.go:334] "Generic (PLEG): container finished" podID="492ab7ad-86ee-4c40-a924-bd5cd948a4dd" containerID="5489fab3329bbf4878f6363efb6c6b7f3da2406c445cd91f0a1da36d4af710b1" exitCode=0 Jan 31 04:09:17 crc kubenswrapper[4667]: I0131 04:09:17.656327 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5055-account-create-update-2rcb7" event={"ID":"492ab7ad-86ee-4c40-a924-bd5cd948a4dd","Type":"ContainerDied","Data":"5489fab3329bbf4878f6363efb6c6b7f3da2406c445cd91f0a1da36d4af710b1"} Jan 31 04:09:17 crc kubenswrapper[4667]: I0131 04:09:17.660665 4667 generic.go:334] "Generic (PLEG): container finished" podID="be0254c4-04b0-44bb-96dd-69a9538a9f9e" containerID="14d55ff3e972f1bbe1669d82936c183500a7124309b34ea687681c33dd1ea204" exitCode=0 Jan 31 04:09:17 crc kubenswrapper[4667]: I0131 04:09:17.660755 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-32cf-account-create-update-hsfjj" event={"ID":"be0254c4-04b0-44bb-96dd-69a9538a9f9e","Type":"ContainerDied","Data":"14d55ff3e972f1bbe1669d82936c183500a7124309b34ea687681c33dd1ea204"} Jan 31 04:09:17 crc kubenswrapper[4667]: I0131 04:09:17.662948 4667 generic.go:334] "Generic (PLEG): container finished" podID="73be4b20-cf7a-430b-995f-07f3475b064c" containerID="03331c518d65e5bd3c7066eaaa3a6c4c87cb6d8b666026c64d9556ea17d9576a" exitCode=0 Jan 31 04:09:17 crc kubenswrapper[4667]: I0131 04:09:17.662990 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x9gc7" event={"ID":"73be4b20-cf7a-430b-995f-07f3475b064c","Type":"ContainerDied","Data":"03331c518d65e5bd3c7066eaaa3a6c4c87cb6d8b666026c64d9556ea17d9576a"} Jan 31 04:09:17 crc kubenswrapper[4667]: I0131 04:09:17.665162 4667 generic.go:334] "Generic (PLEG): container finished" podID="6b1f7066-97d5-4fbf-915e-06d9ed522000" containerID="90d1156965a1f3f60aa54b48a50767575cbe330c7cebf3ed553a11ea924b4ba5" exitCode=0 Jan 31 04:09:17 crc kubenswrapper[4667]: I0131 04:09:17.665226 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bnv28" event={"ID":"6b1f7066-97d5-4fbf-915e-06d9ed522000","Type":"ContainerDied","Data":"90d1156965a1f3f60aa54b48a50767575cbe330c7cebf3ed553a11ea924b4ba5"} Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.249096 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-32cf-account-create-update-hsfjj" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.309529 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be0254c4-04b0-44bb-96dd-69a9538a9f9e-operator-scripts\") pod \"be0254c4-04b0-44bb-96dd-69a9538a9f9e\" (UID: \"be0254c4-04b0-44bb-96dd-69a9538a9f9e\") " Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.309609 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2xv4\" (UniqueName: \"kubernetes.io/projected/be0254c4-04b0-44bb-96dd-69a9538a9f9e-kube-api-access-g2xv4\") pod \"be0254c4-04b0-44bb-96dd-69a9538a9f9e\" (UID: \"be0254c4-04b0-44bb-96dd-69a9538a9f9e\") " Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.312754 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be0254c4-04b0-44bb-96dd-69a9538a9f9e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be0254c4-04b0-44bb-96dd-69a9538a9f9e" (UID: "be0254c4-04b0-44bb-96dd-69a9538a9f9e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.341237 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be0254c4-04b0-44bb-96dd-69a9538a9f9e-kube-api-access-g2xv4" (OuterVolumeSpecName: "kube-api-access-g2xv4") pod "be0254c4-04b0-44bb-96dd-69a9538a9f9e" (UID: "be0254c4-04b0-44bb-96dd-69a9538a9f9e"). InnerVolumeSpecName "kube-api-access-g2xv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.413433 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be0254c4-04b0-44bb-96dd-69a9538a9f9e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.413480 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2xv4\" (UniqueName: \"kubernetes.io/projected/be0254c4-04b0-44bb-96dd-69a9538a9f9e-kube-api-access-g2xv4\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.532670 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-916e-account-create-update-ps447" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.575879 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bnv28" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.604409 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8q9zm" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.618431 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a405aa98-d9c4-4ee1-90bb-da0cc8e09301-operator-scripts\") pod \"a405aa98-d9c4-4ee1-90bb-da0cc8e09301\" (UID: \"a405aa98-d9c4-4ee1-90bb-da0cc8e09301\") " Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.618617 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdpcq\" (UniqueName: \"kubernetes.io/projected/a405aa98-d9c4-4ee1-90bb-da0cc8e09301-kube-api-access-jdpcq\") pod \"a405aa98-d9c4-4ee1-90bb-da0cc8e09301\" (UID: \"a405aa98-d9c4-4ee1-90bb-da0cc8e09301\") " Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.618686 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnqb8\" (UniqueName: \"kubernetes.io/projected/6b1f7066-97d5-4fbf-915e-06d9ed522000-kube-api-access-fnqb8\") pod \"6b1f7066-97d5-4fbf-915e-06d9ed522000\" (UID: \"6b1f7066-97d5-4fbf-915e-06d9ed522000\") " Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.618708 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b1f7066-97d5-4fbf-915e-06d9ed522000-operator-scripts\") pod \"6b1f7066-97d5-4fbf-915e-06d9ed522000\" (UID: \"6b1f7066-97d5-4fbf-915e-06d9ed522000\") " Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.620164 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a405aa98-d9c4-4ee1-90bb-da0cc8e09301-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a405aa98-d9c4-4ee1-90bb-da0cc8e09301" (UID: "a405aa98-d9c4-4ee1-90bb-da0cc8e09301"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.620220 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b1f7066-97d5-4fbf-915e-06d9ed522000-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b1f7066-97d5-4fbf-915e-06d9ed522000" (UID: "6b1f7066-97d5-4fbf-915e-06d9ed522000"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.627431 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b1f7066-97d5-4fbf-915e-06d9ed522000-kube-api-access-fnqb8" (OuterVolumeSpecName: "kube-api-access-fnqb8") pod "6b1f7066-97d5-4fbf-915e-06d9ed522000" (UID: "6b1f7066-97d5-4fbf-915e-06d9ed522000"). InnerVolumeSpecName "kube-api-access-fnqb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.645158 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a405aa98-d9c4-4ee1-90bb-da0cc8e09301-kube-api-access-jdpcq" (OuterVolumeSpecName: "kube-api-access-jdpcq") pod "a405aa98-d9c4-4ee1-90bb-da0cc8e09301" (UID: "a405aa98-d9c4-4ee1-90bb-da0cc8e09301"). InnerVolumeSpecName "kube-api-access-jdpcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.713244 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-916e-account-create-update-ps447" event={"ID":"a405aa98-d9c4-4ee1-90bb-da0cc8e09301","Type":"ContainerDied","Data":"fea137e37a4031ef95aa776d0037ab33440dad97e198c752f05e294578a27e1d"} Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.713559 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fea137e37a4031ef95aa776d0037ab33440dad97e198c752f05e294578a27e1d" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.713690 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-916e-account-create-update-ps447" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.721069 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nl88\" (UniqueName: \"kubernetes.io/projected/cce7e8e8-08b1-41ef-b1d4-efaba433bec0-kube-api-access-4nl88\") pod \"cce7e8e8-08b1-41ef-b1d4-efaba433bec0\" (UID: \"cce7e8e8-08b1-41ef-b1d4-efaba433bec0\") " Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.721216 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce7e8e8-08b1-41ef-b1d4-efaba433bec0-operator-scripts\") pod \"cce7e8e8-08b1-41ef-b1d4-efaba433bec0\" (UID: \"cce7e8e8-08b1-41ef-b1d4-efaba433bec0\") " Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.722141 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a405aa98-d9c4-4ee1-90bb-da0cc8e09301-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.722185 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdpcq\" (UniqueName: \"kubernetes.io/projected/a405aa98-d9c4-4ee1-90bb-da0cc8e09301-kube-api-access-jdpcq\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.722200 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnqb8\" (UniqueName: \"kubernetes.io/projected/6b1f7066-97d5-4fbf-915e-06d9ed522000-kube-api-access-fnqb8\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.722210 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b1f7066-97d5-4fbf-915e-06d9ed522000-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.724789 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-8q9zm" event={"ID":"cce7e8e8-08b1-41ef-b1d4-efaba433bec0","Type":"ContainerDied","Data":"7c452be56a2a7bb4d5dd8790de0430657f5cdc8c08de6c501765685026173dcb"} Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.724831 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c452be56a2a7bb4d5dd8790de0430657f5cdc8c08de6c501765685026173dcb" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.724934 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-8q9zm" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.727814 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cce7e8e8-08b1-41ef-b1d4-efaba433bec0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cce7e8e8-08b1-41ef-b1d4-efaba433bec0" (UID: "cce7e8e8-08b1-41ef-b1d4-efaba433bec0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.734197 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cce7e8e8-08b1-41ef-b1d4-efaba433bec0-kube-api-access-4nl88" (OuterVolumeSpecName: "kube-api-access-4nl88") pod "cce7e8e8-08b1-41ef-b1d4-efaba433bec0" (UID: "cce7e8e8-08b1-41ef-b1d4-efaba433bec0"). InnerVolumeSpecName "kube-api-access-4nl88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.738337 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-32cf-account-create-update-hsfjj" event={"ID":"be0254c4-04b0-44bb-96dd-69a9538a9f9e","Type":"ContainerDied","Data":"cf804acdd10ed721d7f1615835750663b2ef970dcf2bc157e639b99ca5b9d74a"} Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.738382 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf804acdd10ed721d7f1615835750663b2ef970dcf2bc157e639b99ca5b9d74a" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.738445 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-32cf-account-create-update-hsfjj" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.782750 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bnv28" event={"ID":"6b1f7066-97d5-4fbf-915e-06d9ed522000","Type":"ContainerDied","Data":"e1cec544a3903eeb7247f847f60fabb0422afcc89f90ea87d0433f6f247e5fc4"} Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.782804 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1cec544a3903eeb7247f847f60fabb0422afcc89f90ea87d0433f6f247e5fc4" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.782908 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bnv28" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.824544 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nl88\" (UniqueName: \"kubernetes.io/projected/cce7e8e8-08b1-41ef-b1d4-efaba433bec0-kube-api-access-4nl88\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.824579 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce7e8e8-08b1-41ef-b1d4-efaba433bec0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.882475 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5055-account-create-update-2rcb7" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.892466 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-x9gc7" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.926776 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/492ab7ad-86ee-4c40-a924-bd5cd948a4dd-operator-scripts\") pod \"492ab7ad-86ee-4c40-a924-bd5cd948a4dd\" (UID: \"492ab7ad-86ee-4c40-a924-bd5cd948a4dd\") " Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.927314 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8x86\" (UniqueName: \"kubernetes.io/projected/492ab7ad-86ee-4c40-a924-bd5cd948a4dd-kube-api-access-k8x86\") pod \"492ab7ad-86ee-4c40-a924-bd5cd948a4dd\" (UID: \"492ab7ad-86ee-4c40-a924-bd5cd948a4dd\") " Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.928826 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/492ab7ad-86ee-4c40-a924-bd5cd948a4dd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "492ab7ad-86ee-4c40-a924-bd5cd948a4dd" (UID: "492ab7ad-86ee-4c40-a924-bd5cd948a4dd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:09:19 crc kubenswrapper[4667]: I0131 04:09:19.941488 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/492ab7ad-86ee-4c40-a924-bd5cd948a4dd-kube-api-access-k8x86" (OuterVolumeSpecName: "kube-api-access-k8x86") pod "492ab7ad-86ee-4c40-a924-bd5cd948a4dd" (UID: "492ab7ad-86ee-4c40-a924-bd5cd948a4dd"). InnerVolumeSpecName "kube-api-access-k8x86". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:09:20 crc kubenswrapper[4667]: I0131 04:09:20.031859 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73be4b20-cf7a-430b-995f-07f3475b064c-operator-scripts\") pod \"73be4b20-cf7a-430b-995f-07f3475b064c\" (UID: \"73be4b20-cf7a-430b-995f-07f3475b064c\") " Jan 31 04:09:20 crc kubenswrapper[4667]: I0131 04:09:20.032329 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8vr9\" (UniqueName: \"kubernetes.io/projected/73be4b20-cf7a-430b-995f-07f3475b064c-kube-api-access-h8vr9\") pod \"73be4b20-cf7a-430b-995f-07f3475b064c\" (UID: \"73be4b20-cf7a-430b-995f-07f3475b064c\") " Jan 31 04:09:20 crc kubenswrapper[4667]: I0131 04:09:20.032999 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/492ab7ad-86ee-4c40-a924-bd5cd948a4dd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:20 crc kubenswrapper[4667]: I0131 04:09:20.033026 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8x86\" (UniqueName: \"kubernetes.io/projected/492ab7ad-86ee-4c40-a924-bd5cd948a4dd-kube-api-access-k8x86\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:20 crc kubenswrapper[4667]: I0131 04:09:20.034347 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73be4b20-cf7a-430b-995f-07f3475b064c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73be4b20-cf7a-430b-995f-07f3475b064c" (UID: "73be4b20-cf7a-430b-995f-07f3475b064c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:09:20 crc kubenswrapper[4667]: I0131 04:09:20.042105 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73be4b20-cf7a-430b-995f-07f3475b064c-kube-api-access-h8vr9" (OuterVolumeSpecName: "kube-api-access-h8vr9") pod "73be4b20-cf7a-430b-995f-07f3475b064c" (UID: "73be4b20-cf7a-430b-995f-07f3475b064c"). InnerVolumeSpecName "kube-api-access-h8vr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:09:20 crc kubenswrapper[4667]: I0131 04:09:20.134887 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8vr9\" (UniqueName: \"kubernetes.io/projected/73be4b20-cf7a-430b-995f-07f3475b064c-kube-api-access-h8vr9\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:20 crc kubenswrapper[4667]: I0131 04:09:20.134926 4667 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73be4b20-cf7a-430b-995f-07f3475b064c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:20 crc kubenswrapper[4667]: I0131 04:09:20.797320 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x9gc7" event={"ID":"73be4b20-cf7a-430b-995f-07f3475b064c","Type":"ContainerDied","Data":"6bcd5598f01c56a3e01c88ddedf5155894681bac11ade9516d7873a37bef5d7a"} Jan 31 04:09:20 crc kubenswrapper[4667]: I0131 04:09:20.797376 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bcd5598f01c56a3e01c88ddedf5155894681bac11ade9516d7873a37bef5d7a" Jan 31 04:09:20 crc kubenswrapper[4667]: I0131 04:09:20.797410 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-x9gc7" Jan 31 04:09:20 crc kubenswrapper[4667]: I0131 04:09:20.801802 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5055-account-create-update-2rcb7" event={"ID":"492ab7ad-86ee-4c40-a924-bd5cd948a4dd","Type":"ContainerDied","Data":"5913d9ae6134c9a572703d8be7517e68f606dca49fcecd9e957e42779ffcf4e2"} Jan 31 04:09:20 crc kubenswrapper[4667]: I0131 04:09:20.801837 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5913d9ae6134c9a572703d8be7517e68f606dca49fcecd9e957e42779ffcf4e2"
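
Taken together, the reconciler_common and operation_generator entries in this stretch trace each volume of the short-lived db-create and account-create pods through a fixed lifecycle: VerifyControllerAttachedVolume and MountVolume started, then MountVolume.SetUp succeeded while the pod runs, and after the job exits UnmountVolume started, UnmountVolume.TearDown succeeded, and finally "Volume detached". A small sketch for summarising where each volume got to, with the phrase list inferred from this journal (the stage names, regex, and code are ours):

    import re

    # Lifecycle phrases in the order the kubelet volume manager emits them.
    STAGES = [
        "VerifyControllerAttachedVolume started",
        "MountVolume started",
        "MountVolume.SetUp succeeded",
        "UnmountVolume started",
        "UnmountVolume.TearDown succeeded",
        "Volume detached",
    ]

    # Volume names appear as "name" or \"name\" inside the quoted klog message.
    VOL_RE = re.compile(r'volume \\?"(?P<name>[^"\\]+)\\?"')

    def last_stage_per_volume(lines):
        """Map each volume name to the last lifecycle phrase seen for it."""
        seen = {}
        for line in lines:
            for stage in STAGES:
                if stage in line:
                    m = VOL_RE.search(line)
                    if m:
                        seen[m.group("name")] = stage
        return seen

For the completed jobs above, every operator-scripts and kube-api-access-* volume ends at "Volume detached", which is what the repeated reconciler_common.go:293 entries record.
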
Jan 31 04:09:20 crc kubenswrapper[4667]: I0131 04:09:20.801894 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5055-account-create-update-2rcb7" Jan 31 04:09:24 crc kubenswrapper[4667]: I0131 04:09:24.977934 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bhwqz"] Jan 31 04:09:24 crc kubenswrapper[4667]: E0131 04:09:24.979130 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b1f7066-97d5-4fbf-915e-06d9ed522000" containerName="mariadb-database-create" Jan 31 04:09:24 crc kubenswrapper[4667]: I0131 04:09:24.979146 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b1f7066-97d5-4fbf-915e-06d9ed522000" containerName="mariadb-database-create" Jan 31 04:09:24 crc kubenswrapper[4667]: E0131 04:09:24.979156 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be0254c4-04b0-44bb-96dd-69a9538a9f9e" containerName="mariadb-account-create-update" Jan 31 04:09:24 crc kubenswrapper[4667]: I0131 04:09:24.979162 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="be0254c4-04b0-44bb-96dd-69a9538a9f9e" containerName="mariadb-account-create-update" Jan 31 04:09:24 crc kubenswrapper[4667]: E0131 04:09:24.979171 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a405aa98-d9c4-4ee1-90bb-da0cc8e09301" containerName="mariadb-account-create-update" Jan 31 04:09:24 crc kubenswrapper[4667]: I0131 04:09:24.979179 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="a405aa98-d9c4-4ee1-90bb-da0cc8e09301" containerName="mariadb-account-create-update" Jan 31 04:09:24 crc kubenswrapper[4667]: E0131 04:09:24.979195 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492ab7ad-86ee-4c40-a924-bd5cd948a4dd" containerName="mariadb-account-create-update" Jan 31 04:09:24 crc kubenswrapper[4667]: I0131 04:09:24.979201 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="492ab7ad-86ee-4c40-a924-bd5cd948a4dd" containerName="mariadb-account-create-update" Jan 31 04:09:24 crc kubenswrapper[4667]: E0131 04:09:24.979211 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cce7e8e8-08b1-41ef-b1d4-efaba433bec0" containerName="mariadb-database-create" Jan 31 04:09:24 crc kubenswrapper[4667]: I0131 04:09:24.979217 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="cce7e8e8-08b1-41ef-b1d4-efaba433bec0" containerName="mariadb-database-create" Jan 31 04:09:24 crc kubenswrapper[4667]: E0131 04:09:24.979233 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73be4b20-cf7a-430b-995f-07f3475b064c" containerName="mariadb-database-create" Jan 31 04:09:24 crc kubenswrapper[4667]: I0131 04:09:24.979240 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="73be4b20-cf7a-430b-995f-07f3475b064c" containerName="mariadb-database-create" Jan 31 04:09:24 crc kubenswrapper[4667]: I0131 04:09:24.979417 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="73be4b20-cf7a-430b-995f-07f3475b064c" containerName="mariadb-database-create" Jan 31 04:09:24 crc kubenswrapper[4667]: I0131 04:09:24.979429 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="492ab7ad-86ee-4c40-a924-bd5cd948a4dd" containerName="mariadb-account-create-update" Jan 31 04:09:24 crc kubenswrapper[4667]: I0131 04:09:24.979468 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="a405aa98-d9c4-4ee1-90bb-da0cc8e09301" containerName="mariadb-account-create-update" Jan 31 04:09:24 crc kubenswrapper[4667]: I0131 04:09:24.979479 4667 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cce7e8e8-08b1-41ef-b1d4-efaba433bec0" containerName="mariadb-database-create" Jan 31 04:09:24 crc kubenswrapper[4667]: I0131 04:09:24.979487 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="be0254c4-04b0-44bb-96dd-69a9538a9f9e" containerName="mariadb-account-create-update" Jan 31 04:09:24 crc kubenswrapper[4667]: I0131 04:09:24.979495 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b1f7066-97d5-4fbf-915e-06d9ed522000" containerName="mariadb-database-create" Jan 31 04:09:24 crc kubenswrapper[4667]: I0131 04:09:24.980209 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bhwqz" Jan 31 04:09:24 crc kubenswrapper[4667]: I0131 04:09:24.995649 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-55gtc" Jan 31 04:09:24 crc kubenswrapper[4667]: I0131 04:09:24.995817 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 31 04:09:24 crc kubenswrapper[4667]: I0131 04:09:24.996101 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 31 04:09:25 crc kubenswrapper[4667]: I0131 04:09:25.056927 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bhwqz"] Jan 31 04:09:25 crc kubenswrapper[4667]: I0131 04:09:25.065887 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-scripts\") pod \"nova-cell0-conductor-db-sync-bhwqz\" (UID: \"78e0b2a7-8c04-43a3-86b7-d2406c2125c7\") " pod="openstack/nova-cell0-conductor-db-sync-bhwqz" Jan 31 04:09:25 crc kubenswrapper[4667]: I0131 04:09:25.065948 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgnwn\" (UniqueName: \"kubernetes.io/projected/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-kube-api-access-qgnwn\") pod \"nova-cell0-conductor-db-sync-bhwqz\" (UID: \"78e0b2a7-8c04-43a3-86b7-d2406c2125c7\") " pod="openstack/nova-cell0-conductor-db-sync-bhwqz" Jan 31 04:09:25 crc kubenswrapper[4667]: I0131 04:09:25.065998 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bhwqz\" (UID: \"78e0b2a7-8c04-43a3-86b7-d2406c2125c7\") " pod="openstack/nova-cell0-conductor-db-sync-bhwqz" Jan 31 04:09:25 crc kubenswrapper[4667]: I0131 04:09:25.066071 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-config-data\") pod \"nova-cell0-conductor-db-sync-bhwqz\" (UID: \"78e0b2a7-8c04-43a3-86b7-d2406c2125c7\") " pod="openstack/nova-cell0-conductor-db-sync-bhwqz" Jan 31 04:09:25 crc kubenswrapper[4667]: I0131 04:09:25.167862 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-scripts\") pod \"nova-cell0-conductor-db-sync-bhwqz\" (UID: \"78e0b2a7-8c04-43a3-86b7-d2406c2125c7\") " pod="openstack/nova-cell0-conductor-db-sync-bhwqz" Jan 31 04:09:25 crc kubenswrapper[4667]: I0131 04:09:25.167939 4667 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qgnwn\" (UniqueName: \"kubernetes.io/projected/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-kube-api-access-qgnwn\") pod \"nova-cell0-conductor-db-sync-bhwqz\" (UID: \"78e0b2a7-8c04-43a3-86b7-d2406c2125c7\") " pod="openstack/nova-cell0-conductor-db-sync-bhwqz" Jan 31 04:09:25 crc kubenswrapper[4667]: I0131 04:09:25.169094 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bhwqz\" (UID: \"78e0b2a7-8c04-43a3-86b7-d2406c2125c7\") " pod="openstack/nova-cell0-conductor-db-sync-bhwqz" Jan 31 04:09:25 crc kubenswrapper[4667]: I0131 04:09:25.169505 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-config-data\") pod \"nova-cell0-conductor-db-sync-bhwqz\" (UID: \"78e0b2a7-8c04-43a3-86b7-d2406c2125c7\") " pod="openstack/nova-cell0-conductor-db-sync-bhwqz" Jan 31 04:09:25 crc kubenswrapper[4667]: I0131 04:09:25.177061 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-scripts\") pod \"nova-cell0-conductor-db-sync-bhwqz\" (UID: \"78e0b2a7-8c04-43a3-86b7-d2406c2125c7\") " pod="openstack/nova-cell0-conductor-db-sync-bhwqz" Jan 31 04:09:25 crc kubenswrapper[4667]: I0131 04:09:25.184709 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-config-data\") pod \"nova-cell0-conductor-db-sync-bhwqz\" (UID: \"78e0b2a7-8c04-43a3-86b7-d2406c2125c7\") " pod="openstack/nova-cell0-conductor-db-sync-bhwqz" Jan 31 04:09:25 crc kubenswrapper[4667]: I0131 04:09:25.194597 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgnwn\" (UniqueName: \"kubernetes.io/projected/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-kube-api-access-qgnwn\") pod \"nova-cell0-conductor-db-sync-bhwqz\" (UID: \"78e0b2a7-8c04-43a3-86b7-d2406c2125c7\") " pod="openstack/nova-cell0-conductor-db-sync-bhwqz" Jan 31 04:09:25 crc kubenswrapper[4667]: I0131 04:09:25.200948 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bhwqz\" (UID: \"78e0b2a7-8c04-43a3-86b7-d2406c2125c7\") " pod="openstack/nova-cell0-conductor-db-sync-bhwqz" Jan 31 04:09:25 crc kubenswrapper[4667]: I0131 04:09:25.318863 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bhwqz" Jan 31 04:09:25 crc kubenswrapper[4667]: I0131 04:09:25.918888 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bhwqz"] Jan 31 04:09:25 crc kubenswrapper[4667]: I0131 04:09:25.933589 4667 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:09:26 crc kubenswrapper[4667]: I0131 04:09:26.885026 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bhwqz" event={"ID":"78e0b2a7-8c04-43a3-86b7-d2406c2125c7","Type":"ContainerStarted","Data":"b7cc3874c9c85777f70d54f419185bb2c318727925e4fe21490d1e4460da5aa7"} Jan 31 04:09:33 crc kubenswrapper[4667]: I0131 04:09:33.961710 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 04:09:36 crc kubenswrapper[4667]: I0131 04:09:36.019903 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bhwqz" event={"ID":"78e0b2a7-8c04-43a3-86b7-d2406c2125c7","Type":"ContainerStarted","Data":"e5369b5d2bf898d072daf88b8a2a3ad5ebb1d9b7470b0e202b2584d71f186765"} Jan 31 04:09:36 crc kubenswrapper[4667]: I0131 04:09:36.046023 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-bhwqz" podStartSLOduration=2.844222787 podStartE2EDuration="12.045996982s" podCreationTimestamp="2026-01-31 04:09:24 +0000 UTC" firstStartedPulling="2026-01-31 04:09:25.933356238 +0000 UTC m=+1289.449691537" lastFinishedPulling="2026-01-31 04:09:35.135130433 +0000 UTC m=+1298.651465732" observedRunningTime="2026-01-31 04:09:36.042648293 +0000 UTC m=+1299.558983622" watchObservedRunningTime="2026-01-31 04:09:36.045996982 +0000 UTC m=+1299.562332291" Jan 31 04:09:36 crc kubenswrapper[4667]: I0131 04:09:36.246736 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:09:36 crc kubenswrapper[4667]: I0131 04:09:36.247116 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" containerName="ceilometer-central-agent" containerID="cri-o://9b61a14797d90327329ffd0026584c2dbcf88a931f511747e78aed327e3186a8" gracePeriod=30 Jan 31 04:09:36 crc kubenswrapper[4667]: I0131 04:09:36.247299 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" containerName="sg-core" containerID="cri-o://7bb0466bd935eeeb0b3bca71b7fc0eb44ebb3b09804b8e24692d376c38feba87" gracePeriod=30 Jan 31 04:09:36 crc kubenswrapper[4667]: I0131 04:09:36.247451 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" containerName="ceilometer-notification-agent" containerID="cri-o://50d6382f5e324e87e926b7fa6c6be8ee503105ace1dd34b5dcb206c154cd6686" gracePeriod=30 Jan 31 04:09:36 crc kubenswrapper[4667]: I0131 04:09:36.248132 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" containerName="proxy-httpd" containerID="cri-o://28b4941c66dccf2a7d7b22f3be35dbaa54d16e2f2153dfa5dee5f1691a423311" gracePeriod=30 Jan 31 04:09:37 crc kubenswrapper[4667]: I0131 04:09:37.041755 4667 generic.go:334] "Generic (PLEG): container finished" 
podID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" containerID="28b4941c66dccf2a7d7b22f3be35dbaa54d16e2f2153dfa5dee5f1691a423311" exitCode=0 Jan 31 04:09:37 crc kubenswrapper[4667]: I0131 04:09:37.041804 4667 generic.go:334] "Generic (PLEG): container finished" podID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" containerID="7bb0466bd935eeeb0b3bca71b7fc0eb44ebb3b09804b8e24692d376c38feba87" exitCode=2 Jan 31 04:09:37 crc kubenswrapper[4667]: I0131 04:09:37.041815 4667 generic.go:334] "Generic (PLEG): container finished" podID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" containerID="9b61a14797d90327329ffd0026584c2dbcf88a931f511747e78aed327e3186a8" exitCode=0 Jan 31 04:09:37 crc kubenswrapper[4667]: I0131 04:09:37.043807 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae80f16a-2fb7-4994-b184-27c1e5fe52b6","Type":"ContainerDied","Data":"28b4941c66dccf2a7d7b22f3be35dbaa54d16e2f2153dfa5dee5f1691a423311"} Jan 31 04:09:37 crc kubenswrapper[4667]: I0131 04:09:37.043868 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae80f16a-2fb7-4994-b184-27c1e5fe52b6","Type":"ContainerDied","Data":"7bb0466bd935eeeb0b3bca71b7fc0eb44ebb3b09804b8e24692d376c38feba87"} Jan 31 04:09:37 crc kubenswrapper[4667]: I0131 04:09:37.043883 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae80f16a-2fb7-4994-b184-27c1e5fe52b6","Type":"ContainerDied","Data":"9b61a14797d90327329ffd0026584c2dbcf88a931f511747e78aed327e3186a8"} Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.557735 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.697470 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-combined-ca-bundle\") pod \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.697600 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghg84\" (UniqueName: \"kubernetes.io/projected/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-kube-api-access-ghg84\") pod \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.697681 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-log-httpd\") pod \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.697748 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-sg-core-conf-yaml\") pod \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.697803 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-config-data\") pod \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.697906 4667 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-run-httpd\") pod \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.697959 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-scripts\") pod \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\" (UID: \"ae80f16a-2fb7-4994-b184-27c1e5fe52b6\") " Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.699332 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ae80f16a-2fb7-4994-b184-27c1e5fe52b6" (UID: "ae80f16a-2fb7-4994-b184-27c1e5fe52b6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.699574 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ae80f16a-2fb7-4994-b184-27c1e5fe52b6" (UID: "ae80f16a-2fb7-4994-b184-27c1e5fe52b6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.707130 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-scripts" (OuterVolumeSpecName: "scripts") pod "ae80f16a-2fb7-4994-b184-27c1e5fe52b6" (UID: "ae80f16a-2fb7-4994-b184-27c1e5fe52b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.707372 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-kube-api-access-ghg84" (OuterVolumeSpecName: "kube-api-access-ghg84") pod "ae80f16a-2fb7-4994-b184-27c1e5fe52b6" (UID: "ae80f16a-2fb7-4994-b184-27c1e5fe52b6"). InnerVolumeSpecName "kube-api-access-ghg84". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.735940 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ae80f16a-2fb7-4994-b184-27c1e5fe52b6" (UID: "ae80f16a-2fb7-4994-b184-27c1e5fe52b6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.801111 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghg84\" (UniqueName: \"kubernetes.io/projected/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-kube-api-access-ghg84\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.801146 4667 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.801158 4667 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.801168 4667 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.801178 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.815731 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae80f16a-2fb7-4994-b184-27c1e5fe52b6" (UID: "ae80f16a-2fb7-4994-b184-27c1e5fe52b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.851260 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-config-data" (OuterVolumeSpecName: "config-data") pod "ae80f16a-2fb7-4994-b184-27c1e5fe52b6" (UID: "ae80f16a-2fb7-4994-b184-27c1e5fe52b6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.903628 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:39 crc kubenswrapper[4667]: I0131 04:09:39.903670 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae80f16a-2fb7-4994-b184-27c1e5fe52b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.078105 4667 generic.go:334] "Generic (PLEG): container finished" podID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" containerID="50d6382f5e324e87e926b7fa6c6be8ee503105ace1dd34b5dcb206c154cd6686" exitCode=0 Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.078702 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae80f16a-2fb7-4994-b184-27c1e5fe52b6","Type":"ContainerDied","Data":"50d6382f5e324e87e926b7fa6c6be8ee503105ace1dd34b5dcb206c154cd6686"} Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.079196 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae80f16a-2fb7-4994-b184-27c1e5fe52b6","Type":"ContainerDied","Data":"10efd47238628ff8b941fd0cbbf6bdba9c67d4b0c324fd760ea664df8fad00cf"} Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.079352 4667 scope.go:117] "RemoveContainer" containerID="28b4941c66dccf2a7d7b22f3be35dbaa54d16e2f2153dfa5dee5f1691a423311" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.078812 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.105447 4667 scope.go:117] "RemoveContainer" containerID="7bb0466bd935eeeb0b3bca71b7fc0eb44ebb3b09804b8e24692d376c38feba87" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.177833 4667 scope.go:117] "RemoveContainer" containerID="50d6382f5e324e87e926b7fa6c6be8ee503105ace1dd34b5dcb206c154cd6686" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.189016 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.200088 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.203772 4667 scope.go:117] "RemoveContainer" containerID="9b61a14797d90327329ffd0026584c2dbcf88a931f511747e78aed327e3186a8" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.231017 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:09:40 crc kubenswrapper[4667]: E0131 04:09:40.231524 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" containerName="ceilometer-notification-agent" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.231541 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" containerName="ceilometer-notification-agent" Jan 31 04:09:40 crc kubenswrapper[4667]: E0131 04:09:40.231567 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" containerName="sg-core" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.231574 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" 
containerName="sg-core" Jan 31 04:09:40 crc kubenswrapper[4667]: E0131 04:09:40.231583 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" containerName="ceilometer-central-agent" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.231588 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" containerName="ceilometer-central-agent" Jan 31 04:09:40 crc kubenswrapper[4667]: E0131 04:09:40.231610 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" containerName="proxy-httpd" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.231615 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" containerName="proxy-httpd" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.231808 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" containerName="ceilometer-notification-agent" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.231818 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" containerName="sg-core" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.231828 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" containerName="proxy-httpd" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.231864 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" containerName="ceilometer-central-agent" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.232342 4667 scope.go:117] "RemoveContainer" containerID="28b4941c66dccf2a7d7b22f3be35dbaa54d16e2f2153dfa5dee5f1691a423311" Jan 31 04:09:40 crc kubenswrapper[4667]: E0131 04:09:40.236479 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28b4941c66dccf2a7d7b22f3be35dbaa54d16e2f2153dfa5dee5f1691a423311\": container with ID starting with 28b4941c66dccf2a7d7b22f3be35dbaa54d16e2f2153dfa5dee5f1691a423311 not found: ID does not exist" containerID="28b4941c66dccf2a7d7b22f3be35dbaa54d16e2f2153dfa5dee5f1691a423311" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.236548 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b4941c66dccf2a7d7b22f3be35dbaa54d16e2f2153dfa5dee5f1691a423311"} err="failed to get container status \"28b4941c66dccf2a7d7b22f3be35dbaa54d16e2f2153dfa5dee5f1691a423311\": rpc error: code = NotFound desc = could not find container \"28b4941c66dccf2a7d7b22f3be35dbaa54d16e2f2153dfa5dee5f1691a423311\": container with ID starting with 28b4941c66dccf2a7d7b22f3be35dbaa54d16e2f2153dfa5dee5f1691a423311 not found: ID does not exist" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.236586 4667 scope.go:117] "RemoveContainer" containerID="7bb0466bd935eeeb0b3bca71b7fc0eb44ebb3b09804b8e24692d376c38feba87" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.237801 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: E0131 04:09:40.239290 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bb0466bd935eeeb0b3bca71b7fc0eb44ebb3b09804b8e24692d376c38feba87\": container with ID starting with 7bb0466bd935eeeb0b3bca71b7fc0eb44ebb3b09804b8e24692d376c38feba87 not found: ID does not exist" containerID="7bb0466bd935eeeb0b3bca71b7fc0eb44ebb3b09804b8e24692d376c38feba87" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.239322 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bb0466bd935eeeb0b3bca71b7fc0eb44ebb3b09804b8e24692d376c38feba87"} err="failed to get container status \"7bb0466bd935eeeb0b3bca71b7fc0eb44ebb3b09804b8e24692d376c38feba87\": rpc error: code = NotFound desc = could not find container \"7bb0466bd935eeeb0b3bca71b7fc0eb44ebb3b09804b8e24692d376c38feba87\": container with ID starting with 7bb0466bd935eeeb0b3bca71b7fc0eb44ebb3b09804b8e24692d376c38feba87 not found: ID does not exist" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.239348 4667 scope.go:117] "RemoveContainer" containerID="50d6382f5e324e87e926b7fa6c6be8ee503105ace1dd34b5dcb206c154cd6686" Jan 31 04:09:40 crc kubenswrapper[4667]: E0131 04:09:40.239941 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50d6382f5e324e87e926b7fa6c6be8ee503105ace1dd34b5dcb206c154cd6686\": container with ID starting with 50d6382f5e324e87e926b7fa6c6be8ee503105ace1dd34b5dcb206c154cd6686 not found: ID does not exist" containerID="50d6382f5e324e87e926b7fa6c6be8ee503105ace1dd34b5dcb206c154cd6686" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.239984 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50d6382f5e324e87e926b7fa6c6be8ee503105ace1dd34b5dcb206c154cd6686"} err="failed to get container status \"50d6382f5e324e87e926b7fa6c6be8ee503105ace1dd34b5dcb206c154cd6686\": rpc error: code = NotFound desc = could not find container \"50d6382f5e324e87e926b7fa6c6be8ee503105ace1dd34b5dcb206c154cd6686\": container with ID starting with 50d6382f5e324e87e926b7fa6c6be8ee503105ace1dd34b5dcb206c154cd6686 not found: ID does not exist" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.240004 4667 scope.go:117] "RemoveContainer" containerID="9b61a14797d90327329ffd0026584c2dbcf88a931f511747e78aed327e3186a8" Jan 31 04:09:40 crc kubenswrapper[4667]: E0131 04:09:40.240285 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b61a14797d90327329ffd0026584c2dbcf88a931f511747e78aed327e3186a8\": container with ID starting with 9b61a14797d90327329ffd0026584c2dbcf88a931f511747e78aed327e3186a8 not found: ID does not exist" containerID="9b61a14797d90327329ffd0026584c2dbcf88a931f511747e78aed327e3186a8" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.240309 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b61a14797d90327329ffd0026584c2dbcf88a931f511747e78aed327e3186a8"} err="failed to get container status \"9b61a14797d90327329ffd0026584c2dbcf88a931f511747e78aed327e3186a8\": rpc error: code = NotFound desc = could not find container \"9b61a14797d90327329ffd0026584c2dbcf88a931f511747e78aed327e3186a8\": container with ID starting with 
Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.245452 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.245732 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.257632 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.418227 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-scripts\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.418308 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbb2t\" (UniqueName: \"kubernetes.io/projected/9b27517d-03d7-421b-9875-86ed13c59563-kube-api-access-xbb2t\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.418355 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b27517d-03d7-421b-9875-86ed13c59563-run-httpd\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.418393 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-config-data\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.419501 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.419569 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b27517d-03d7-421b-9875-86ed13c59563-log-httpd\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.419631 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.522179 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31
04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.522293 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b27517d-03d7-421b-9875-86ed13c59563-log-httpd\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.522996 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b27517d-03d7-421b-9875-86ed13c59563-log-httpd\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.523088 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.523496 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-scripts\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.523550 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbb2t\" (UniqueName: \"kubernetes.io/projected/9b27517d-03d7-421b-9875-86ed13c59563-kube-api-access-xbb2t\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.523603 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b27517d-03d7-421b-9875-86ed13c59563-run-httpd\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.523649 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-config-data\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.527017 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b27517d-03d7-421b-9875-86ed13c59563-run-httpd\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.532981 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-scripts\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.533382 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-config-data\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.533756 4667 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.547289 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.555065 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbb2t\" (UniqueName: \"kubernetes.io/projected/9b27517d-03d7-421b-9875-86ed13c59563-kube-api-access-xbb2t\") pod \"ceilometer-0\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " pod="openstack/ceilometer-0" Jan 31 04:09:40 crc kubenswrapper[4667]: I0131 04:09:40.558628 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:09:41 crc kubenswrapper[4667]: I0131 04:09:41.079466 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:09:41 crc kubenswrapper[4667]: W0131 04:09:41.094743 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b27517d_03d7_421b_9875_86ed13c59563.slice/crio-9820834efee43697b35ca81bd415e8e3cbeb4c7f1c29015d289143041b86c20c WatchSource:0}: Error finding container 9820834efee43697b35ca81bd415e8e3cbeb4c7f1c29015d289143041b86c20c: Status 404 returned error can't find the container with id 9820834efee43697b35ca81bd415e8e3cbeb4c7f1c29015d289143041b86c20c Jan 31 04:09:41 crc kubenswrapper[4667]: I0131 04:09:41.293915 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae80f16a-2fb7-4994-b184-27c1e5fe52b6" path="/var/lib/kubelet/pods/ae80f16a-2fb7-4994-b184-27c1e5fe52b6/volumes" Jan 31 04:09:41 crc kubenswrapper[4667]: I0131 04:09:41.650855 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:09:41 crc kubenswrapper[4667]: I0131 04:09:41.651695 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="9cffe8ff-780a-4dad-92ee-175a0a9d6409" containerName="kube-state-metrics" containerID="cri-o://74b39cb087f94a38ff1a37f6c11923c7ec96ec48c9654afe04b71a791ad2129d" gracePeriod=30 Jan 31 04:09:42 crc kubenswrapper[4667]: I0131 04:09:42.125322 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b27517d-03d7-421b-9875-86ed13c59563","Type":"ContainerStarted","Data":"5359333fcbac8539b304a6ff51ce6afee12f9e53301cbf046f93c10be46decf5"} Jan 31 04:09:42 crc kubenswrapper[4667]: I0131 04:09:42.125788 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b27517d-03d7-421b-9875-86ed13c59563","Type":"ContainerStarted","Data":"9820834efee43697b35ca81bd415e8e3cbeb4c7f1c29015d289143041b86c20c"} Jan 31 04:09:42 crc kubenswrapper[4667]: I0131 04:09:42.145244 4667 generic.go:334] "Generic (PLEG): container finished" podID="c6974567-3bea-447a-bb8b-ced22b6d34ce" containerID="8585ef04e351d14473c07be1275ec2c6840212275304d32bbdccbfc70cb910c8" exitCode=137 Jan 31 04:09:42 crc kubenswrapper[4667]: I0131 04:09:42.145392 4667 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/horizon-86c748c4d6-2grmh" event={"ID":"c6974567-3bea-447a-bb8b-ced22b6d34ce","Type":"ContainerDied","Data":"8585ef04e351d14473c07be1275ec2c6840212275304d32bbdccbfc70cb910c8"} Jan 31 04:09:42 crc kubenswrapper[4667]: I0131 04:09:42.145444 4667 scope.go:117] "RemoveContainer" containerID="3fa239e2b62f1e7aacddff89f2ed28a743b788c82b3a5252236ff48d58158880" Jan 31 04:09:42 crc kubenswrapper[4667]: I0131 04:09:42.150456 4667 generic.go:334] "Generic (PLEG): container finished" podID="9cffe8ff-780a-4dad-92ee-175a0a9d6409" containerID="74b39cb087f94a38ff1a37f6c11923c7ec96ec48c9654afe04b71a791ad2129d" exitCode=2 Jan 31 04:09:42 crc kubenswrapper[4667]: I0131 04:09:42.150532 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9cffe8ff-780a-4dad-92ee-175a0a9d6409","Type":"ContainerDied","Data":"74b39cb087f94a38ff1a37f6c11923c7ec96ec48c9654afe04b71a791ad2129d"} Jan 31 04:09:42 crc kubenswrapper[4667]: I0131 04:09:42.164828 4667 generic.go:334] "Generic (PLEG): container finished" podID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerID="d51854ff784d64b2b3584b6cdda45491a29c7d1089ddf69708469cfc6e98fccc" exitCode=137 Jan 31 04:09:42 crc kubenswrapper[4667]: I0131 04:09:42.164903 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78789d8f44-5trmc" event={"ID":"b7f8fd18-06a0-432e-8c17-c9b432b6ca69","Type":"ContainerDied","Data":"d51854ff784d64b2b3584b6cdda45491a29c7d1089ddf69708469cfc6e98fccc"} Jan 31 04:09:42 crc kubenswrapper[4667]: I0131 04:09:42.191246 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 04:09:42 crc kubenswrapper[4667]: I0131 04:09:42.362928 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-276l4\" (UniqueName: \"kubernetes.io/projected/9cffe8ff-780a-4dad-92ee-175a0a9d6409-kube-api-access-276l4\") pod \"9cffe8ff-780a-4dad-92ee-175a0a9d6409\" (UID: \"9cffe8ff-780a-4dad-92ee-175a0a9d6409\") " Jan 31 04:09:42 crc kubenswrapper[4667]: I0131 04:09:42.387127 4667 scope.go:117] "RemoveContainer" containerID="75959a94e1776a7025f344a57c090542bf63fb0615110c632e65e3a8c9188b18" Jan 31 04:09:42 crc kubenswrapper[4667]: I0131 04:09:42.387148 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cffe8ff-780a-4dad-92ee-175a0a9d6409-kube-api-access-276l4" (OuterVolumeSpecName: "kube-api-access-276l4") pod "9cffe8ff-780a-4dad-92ee-175a0a9d6409" (UID: "9cffe8ff-780a-4dad-92ee-175a0a9d6409"). InnerVolumeSpecName "kube-api-access-276l4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:09:42 crc kubenswrapper[4667]: I0131 04:09:42.465514 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-276l4\" (UniqueName: \"kubernetes.io/projected/9cffe8ff-780a-4dad-92ee-175a0a9d6409-kube-api-access-276l4\") on node \"crc\" DevicePath \"\"" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.178192 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78789d8f44-5trmc" event={"ID":"b7f8fd18-06a0-432e-8c17-c9b432b6ca69","Type":"ContainerStarted","Data":"856c0d14a9c006eba9b5acda21554d0a1e3d38398546c6f05a23d35e0977b245"} Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.182011 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86c748c4d6-2grmh" event={"ID":"c6974567-3bea-447a-bb8b-ced22b6d34ce","Type":"ContainerStarted","Data":"4d94eb28096da9efc7dc4e1a7ab99c87543d8b142e7c7ff1698b7c7d17eb3cc0"} Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.184213 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9cffe8ff-780a-4dad-92ee-175a0a9d6409","Type":"ContainerDied","Data":"eca10cf795fb75819b4b7f39f372bebd70828f104472116a3c88addc1b62e4a6"} Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.184285 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.184302 4667 scope.go:117] "RemoveContainer" containerID="74b39cb087f94a38ff1a37f6c11923c7ec96ec48c9654afe04b71a791ad2129d" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.247882 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.259658 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.279069 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:09:43 crc kubenswrapper[4667]: E0131 04:09:43.279572 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cffe8ff-780a-4dad-92ee-175a0a9d6409" containerName="kube-state-metrics" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.279595 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cffe8ff-780a-4dad-92ee-175a0a9d6409" containerName="kube-state-metrics" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.279826 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cffe8ff-780a-4dad-92ee-175a0a9d6409" containerName="kube-state-metrics" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.280612 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.291874 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.292196 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.292637 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cffe8ff-780a-4dad-92ee-175a0a9d6409" path="/var/lib/kubelet/pods/9cffe8ff-780a-4dad-92ee-175a0a9d6409/volumes" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.303768 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.394234 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee717f47-2475-42f9-b4ce-25960d0fa24c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ee717f47-2475-42f9-b4ce-25960d0fa24c\") " pod="openstack/kube-state-metrics-0" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.398665 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ee717f47-2475-42f9-b4ce-25960d0fa24c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ee717f47-2475-42f9-b4ce-25960d0fa24c\") " pod="openstack/kube-state-metrics-0" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.398719 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee717f47-2475-42f9-b4ce-25960d0fa24c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ee717f47-2475-42f9-b4ce-25960d0fa24c\") " pod="openstack/kube-state-metrics-0" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.398883 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdjww\" (UniqueName: \"kubernetes.io/projected/ee717f47-2475-42f9-b4ce-25960d0fa24c-kube-api-access-pdjww\") pod \"kube-state-metrics-0\" (UID: \"ee717f47-2475-42f9-b4ce-25960d0fa24c\") " pod="openstack/kube-state-metrics-0" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.500853 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee717f47-2475-42f9-b4ce-25960d0fa24c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ee717f47-2475-42f9-b4ce-25960d0fa24c\") " pod="openstack/kube-state-metrics-0" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.501296 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ee717f47-2475-42f9-b4ce-25960d0fa24c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ee717f47-2475-42f9-b4ce-25960d0fa24c\") " pod="openstack/kube-state-metrics-0" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.501474 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee717f47-2475-42f9-b4ce-25960d0fa24c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: 
\"ee717f47-2475-42f9-b4ce-25960d0fa24c\") " pod="openstack/kube-state-metrics-0" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.501662 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdjww\" (UniqueName: \"kubernetes.io/projected/ee717f47-2475-42f9-b4ce-25960d0fa24c-kube-api-access-pdjww\") pod \"kube-state-metrics-0\" (UID: \"ee717f47-2475-42f9-b4ce-25960d0fa24c\") " pod="openstack/kube-state-metrics-0" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.509994 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee717f47-2475-42f9-b4ce-25960d0fa24c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ee717f47-2475-42f9-b4ce-25960d0fa24c\") " pod="openstack/kube-state-metrics-0" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.510755 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ee717f47-2475-42f9-b4ce-25960d0fa24c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ee717f47-2475-42f9-b4ce-25960d0fa24c\") " pod="openstack/kube-state-metrics-0" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.527043 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdjww\" (UniqueName: \"kubernetes.io/projected/ee717f47-2475-42f9-b4ce-25960d0fa24c-kube-api-access-pdjww\") pod \"kube-state-metrics-0\" (UID: \"ee717f47-2475-42f9-b4ce-25960d0fa24c\") " pod="openstack/kube-state-metrics-0" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.547220 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee717f47-2475-42f9-b4ce-25960d0fa24c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ee717f47-2475-42f9-b4ce-25960d0fa24c\") " pod="openstack/kube-state-metrics-0" Jan 31 04:09:43 crc kubenswrapper[4667]: I0131 04:09:43.598427 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 31 04:09:44 crc kubenswrapper[4667]: I0131 04:09:44.180046 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 31 04:09:44 crc kubenswrapper[4667]: I0131 04:09:44.214981 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ee717f47-2475-42f9-b4ce-25960d0fa24c","Type":"ContainerStarted","Data":"50ea5e67dc6cd5c77d783fa9ccb9a15461fd9f9cc88aee620f7855d3c9e3c930"} Jan 31 04:09:44 crc kubenswrapper[4667]: I0131 04:09:44.239377 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b27517d-03d7-421b-9875-86ed13c59563","Type":"ContainerStarted","Data":"1c2a6f7b79b61f75b9a16a2c9852fab43ecf31936bbecda700bc44263f232029"} Jan 31 04:09:44 crc kubenswrapper[4667]: I0131 04:09:44.239422 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b27517d-03d7-421b-9875-86ed13c59563","Type":"ContainerStarted","Data":"779f06c291a842c0ffd5794149f2483bdf9acd0d6ba5bcbeba838d6621234e66"} Jan 31 04:09:45 crc kubenswrapper[4667]: I0131 04:09:45.230232 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:09:45 crc kubenswrapper[4667]: I0131 04:09:45.249616 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ee717f47-2475-42f9-b4ce-25960d0fa24c","Type":"ContainerStarted","Data":"ff1ddb51f420f288ebe8fd4aeb36cf1387bff05ba587ad226f578b28be100cc8"} Jan 31 04:09:45 crc kubenswrapper[4667]: I0131 04:09:45.249867 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 31 04:09:45 crc kubenswrapper[4667]: I0131 04:09:45.284018 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.913115582 podStartE2EDuration="2.283995027s" podCreationTimestamp="2026-01-31 04:09:43 +0000 UTC" firstStartedPulling="2026-01-31 04:09:44.192161934 +0000 UTC m=+1307.708497233" lastFinishedPulling="2026-01-31 04:09:44.563041379 +0000 UTC m=+1308.079376678" observedRunningTime="2026-01-31 04:09:45.274372212 +0000 UTC m=+1308.790707531" watchObservedRunningTime="2026-01-31 04:09:45.283995027 +0000 UTC m=+1308.800330326" Jan 31 04:09:47 crc kubenswrapper[4667]: I0131 04:09:47.270390 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b27517d-03d7-421b-9875-86ed13c59563","Type":"ContainerStarted","Data":"9dc32b1f9a21a019e1557d2ace63e4c2497b90006d960fca870dc76694dce474"} Jan 31 04:09:47 crc kubenswrapper[4667]: I0131 04:09:47.271463 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b27517d-03d7-421b-9875-86ed13c59563" containerName="ceilometer-central-agent" containerID="cri-o://5359333fcbac8539b304a6ff51ce6afee12f9e53301cbf046f93c10be46decf5" gracePeriod=30 Jan 31 04:09:47 crc kubenswrapper[4667]: I0131 04:09:47.271880 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 04:09:47 crc kubenswrapper[4667]: I0131 04:09:47.272273 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b27517d-03d7-421b-9875-86ed13c59563" containerName="proxy-httpd" containerID="cri-o://9dc32b1f9a21a019e1557d2ace63e4c2497b90006d960fca870dc76694dce474" gracePeriod=30 Jan 31 04:09:47 crc 
Jan 31 04:09:47 crc kubenswrapper[4667]: I0131 04:09:47.272388 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b27517d-03d7-421b-9875-86ed13c59563" containerName="ceilometer-notification-agent" containerID="cri-o://779f06c291a842c0ffd5794149f2483bdf9acd0d6ba5bcbeba838d6621234e66" gracePeriod=30 Jan 31 04:09:47 crc kubenswrapper[4667]: I0131 04:09:47.336155 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8924064459999999 podStartE2EDuration="7.336122231s" podCreationTimestamp="2026-01-31 04:09:40 +0000 UTC" firstStartedPulling="2026-01-31 04:09:41.097479265 +0000 UTC m=+1304.613814564" lastFinishedPulling="2026-01-31 04:09:46.54119505 +0000 UTC m=+1310.057530349" observedRunningTime="2026-01-31 04:09:47.312887284 +0000 UTC m=+1310.829222613" watchObservedRunningTime="2026-01-31 04:09:47.336122231 +0000 UTC m=+1310.852457530" Jan 31 04:09:48 crc kubenswrapper[4667]: I0131 04:09:48.287389 4667 generic.go:334] "Generic (PLEG): container finished" podID="9b27517d-03d7-421b-9875-86ed13c59563" containerID="1c2a6f7b79b61f75b9a16a2c9852fab43ecf31936bbecda700bc44263f232029" exitCode=2 Jan 31 04:09:48 crc kubenswrapper[4667]: I0131 04:09:48.287899 4667 generic.go:334] "Generic (PLEG): container finished" podID="9b27517d-03d7-421b-9875-86ed13c59563" containerID="779f06c291a842c0ffd5794149f2483bdf9acd0d6ba5bcbeba838d6621234e66" exitCode=0 Jan 31 04:09:48 crc kubenswrapper[4667]: I0131 04:09:48.287429 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b27517d-03d7-421b-9875-86ed13c59563","Type":"ContainerDied","Data":"1c2a6f7b79b61f75b9a16a2c9852fab43ecf31936bbecda700bc44263f232029"} Jan 31 04:09:48 crc kubenswrapper[4667]: I0131 04:09:48.287949 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b27517d-03d7-421b-9875-86ed13c59563","Type":"ContainerDied","Data":"779f06c291a842c0ffd5794149f2483bdf9acd0d6ba5bcbeba838d6621234e66"} Jan 31 04:09:51 crc kubenswrapper[4667]: I0131 04:09:51.754702 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:09:51 crc kubenswrapper[4667]: I0131 04:09:51.755555 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:09:51 crc kubenswrapper[4667]: I0131 04:09:51.843788 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:09:51 crc kubenswrapper[4667]: I0131 04:09:51.843942 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:09:53 crc kubenswrapper[4667]: I0131 04:09:53.617325 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 31 04:09:55 crc kubenswrapper[4667]: I0131 04:09:55.400021 4667 generic.go:334] "Generic (PLEG): container finished" podID="9b27517d-03d7-421b-9875-86ed13c59563" containerID="5359333fcbac8539b304a6ff51ce6afee12f9e53301cbf046f93c10be46decf5" exitCode=0 Jan 31 04:09:55
Jan 31 04:09:55 crc kubenswrapper[4667]: I0131 04:09:55.400098 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b27517d-03d7-421b-9875-86ed13c59563","Type":"ContainerDied","Data":"5359333fcbac8539b304a6ff51ce6afee12f9e53301cbf046f93c10be46decf5"}
Jan 31 04:09:58 crc kubenswrapper[4667]: I0131 04:09:58.434124 4667 generic.go:334] "Generic (PLEG): container finished" podID="78e0b2a7-8c04-43a3-86b7-d2406c2125c7" containerID="e5369b5d2bf898d072daf88b8a2a3ad5ebb1d9b7470b0e202b2584d71f186765" exitCode=0
Jan 31 04:09:58 crc kubenswrapper[4667]: I0131 04:09:58.434234 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bhwqz" event={"ID":"78e0b2a7-8c04-43a3-86b7-d2406c2125c7","Type":"ContainerDied","Data":"e5369b5d2bf898d072daf88b8a2a3ad5ebb1d9b7470b0e202b2584d71f186765"}
Jan 31 04:09:59 crc kubenswrapper[4667]: I0131 04:09:59.838596 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bhwqz"
Jan 31 04:09:59 crc kubenswrapper[4667]: I0131 04:09:59.872930 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgnwn\" (UniqueName: \"kubernetes.io/projected/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-kube-api-access-qgnwn\") pod \"78e0b2a7-8c04-43a3-86b7-d2406c2125c7\" (UID: \"78e0b2a7-8c04-43a3-86b7-d2406c2125c7\") "
Jan 31 04:09:59 crc kubenswrapper[4667]: I0131 04:09:59.873082 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-scripts\") pod \"78e0b2a7-8c04-43a3-86b7-d2406c2125c7\" (UID: \"78e0b2a7-8c04-43a3-86b7-d2406c2125c7\") "
Jan 31 04:09:59 crc kubenswrapper[4667]: I0131 04:09:59.873297 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-combined-ca-bundle\") pod \"78e0b2a7-8c04-43a3-86b7-d2406c2125c7\" (UID: \"78e0b2a7-8c04-43a3-86b7-d2406c2125c7\") "
Jan 31 04:09:59 crc kubenswrapper[4667]: I0131 04:09:59.873373 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-config-data\") pod \"78e0b2a7-8c04-43a3-86b7-d2406c2125c7\" (UID: \"78e0b2a7-8c04-43a3-86b7-d2406c2125c7\") "
Jan 31 04:09:59 crc kubenswrapper[4667]: I0131 04:09:59.881778 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-kube-api-access-qgnwn" (OuterVolumeSpecName: "kube-api-access-qgnwn") pod "78e0b2a7-8c04-43a3-86b7-d2406c2125c7" (UID: "78e0b2a7-8c04-43a3-86b7-d2406c2125c7"). InnerVolumeSpecName "kube-api-access-qgnwn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:09:59 crc kubenswrapper[4667]: I0131 04:09:59.891135 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-scripts" (OuterVolumeSpecName: "scripts") pod "78e0b2a7-8c04-43a3-86b7-d2406c2125c7" (UID: "78e0b2a7-8c04-43a3-86b7-d2406c2125c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:09:59 crc kubenswrapper[4667]: I0131 04:09:59.913091 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-config-data" (OuterVolumeSpecName: "config-data") pod "78e0b2a7-8c04-43a3-86b7-d2406c2125c7" (UID: "78e0b2a7-8c04-43a3-86b7-d2406c2125c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:09:59 crc kubenswrapper[4667]: I0131 04:09:59.916310 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78e0b2a7-8c04-43a3-86b7-d2406c2125c7" (UID: "78e0b2a7-8c04-43a3-86b7-d2406c2125c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:09:59 crc kubenswrapper[4667]: I0131 04:09:59.976496 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 04:09:59 crc kubenswrapper[4667]: I0131 04:09:59.976533 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 04:09:59 crc kubenswrapper[4667]: I0131 04:09:59.976544 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgnwn\" (UniqueName: \"kubernetes.io/projected/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-kube-api-access-qgnwn\") on node \"crc\" DevicePath \"\""
Jan 31 04:09:59 crc kubenswrapper[4667]: I0131 04:09:59.976554 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e0b2a7-8c04-43a3-86b7-d2406c2125c7-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 04:10:00 crc kubenswrapper[4667]: I0131 04:10:00.459883 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bhwqz" event={"ID":"78e0b2a7-8c04-43a3-86b7-d2406c2125c7","Type":"ContainerDied","Data":"b7cc3874c9c85777f70d54f419185bb2c318727925e4fe21490d1e4460da5aa7"}
Jan 31 04:10:00 crc kubenswrapper[4667]: I0131 04:10:00.459962 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7cc3874c9c85777f70d54f419185bb2c318727925e4fe21490d1e4460da5aa7"
Jan 31 04:10:00 crc kubenswrapper[4667]: I0131 04:10:00.460093 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bhwqz"
Jan 31 04:10:00 crc kubenswrapper[4667]: I0131 04:10:00.604270 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 31 04:10:00 crc kubenswrapper[4667]: E0131 04:10:00.604771 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e0b2a7-8c04-43a3-86b7-d2406c2125c7" containerName="nova-cell0-conductor-db-sync"
Jan 31 04:10:00 crc kubenswrapper[4667]: I0131 04:10:00.604792 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e0b2a7-8c04-43a3-86b7-d2406c2125c7" containerName="nova-cell0-conductor-db-sync"
Jan 31 04:10:00 crc kubenswrapper[4667]: I0131 04:10:00.605030 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="78e0b2a7-8c04-43a3-86b7-d2406c2125c7" containerName="nova-cell0-conductor-db-sync"
Jan 31 04:10:00 crc kubenswrapper[4667]: I0131 04:10:00.606165 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 31 04:10:00 crc kubenswrapper[4667]: I0131 04:10:00.609574 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-55gtc"
Jan 31 04:10:00 crc kubenswrapper[4667]: I0131 04:10:00.612681 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 31 04:10:00 crc kubenswrapper[4667]: I0131 04:10:00.635325 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 31 04:10:00 crc kubenswrapper[4667]: I0131 04:10:00.701528 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/841e82c7-29d0-414e-a01d-05718a83749b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"841e82c7-29d0-414e-a01d-05718a83749b\") " pod="openstack/nova-cell0-conductor-0"
Jan 31 04:10:00 crc kubenswrapper[4667]: I0131 04:10:00.701761 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqp8z\" (UniqueName: \"kubernetes.io/projected/841e82c7-29d0-414e-a01d-05718a83749b-kube-api-access-sqp8z\") pod \"nova-cell0-conductor-0\" (UID: \"841e82c7-29d0-414e-a01d-05718a83749b\") " pod="openstack/nova-cell0-conductor-0"
Jan 31 04:10:00 crc kubenswrapper[4667]: I0131 04:10:00.701869 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/841e82c7-29d0-414e-a01d-05718a83749b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"841e82c7-29d0-414e-a01d-05718a83749b\") " pod="openstack/nova-cell0-conductor-0"
Jan 31 04:10:00 crc kubenswrapper[4667]: I0131 04:10:00.804229 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/841e82c7-29d0-414e-a01d-05718a83749b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"841e82c7-29d0-414e-a01d-05718a83749b\") " pod="openstack/nova-cell0-conductor-0"
Jan 31 04:10:00 crc kubenswrapper[4667]: I0131 04:10:00.804420 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/841e82c7-29d0-414e-a01d-05718a83749b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"841e82c7-29d0-414e-a01d-05718a83749b\") " pod="openstack/nova-cell0-conductor-0"
Jan 31 04:10:00 crc kubenswrapper[4667]: I0131 04:10:00.804455 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqp8z\" (UniqueName: \"kubernetes.io/projected/841e82c7-29d0-414e-a01d-05718a83749b-kube-api-access-sqp8z\") pod \"nova-cell0-conductor-0\" (UID: \"841e82c7-29d0-414e-a01d-05718a83749b\") " pod="openstack/nova-cell0-conductor-0"
Jan 31 04:10:00 crc kubenswrapper[4667]: I0131 04:10:00.809777 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/841e82c7-29d0-414e-a01d-05718a83749b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"841e82c7-29d0-414e-a01d-05718a83749b\") " pod="openstack/nova-cell0-conductor-0"
Jan 31 04:10:00 crc kubenswrapper[4667]: I0131 04:10:00.817595 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/841e82c7-29d0-414e-a01d-05718a83749b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"841e82c7-29d0-414e-a01d-05718a83749b\") " pod="openstack/nova-cell0-conductor-0"
Jan 31 04:10:00 crc kubenswrapper[4667]: I0131 04:10:00.822426 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqp8z\" (UniqueName: \"kubernetes.io/projected/841e82c7-29d0-414e-a01d-05718a83749b-kube-api-access-sqp8z\") pod \"nova-cell0-conductor-0\" (UID: \"841e82c7-29d0-414e-a01d-05718a83749b\") " pod="openstack/nova-cell0-conductor-0"
Jan 31 04:10:00 crc kubenswrapper[4667]: I0131 04:10:00.925624 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 31 04:10:01 crc kubenswrapper[4667]: I0131 04:10:01.405275 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 31 04:10:01 crc kubenswrapper[4667]: I0131 04:10:01.476235 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"841e82c7-29d0-414e-a01d-05718a83749b","Type":"ContainerStarted","Data":"4dccd26c7dac852db26be69f18dfc9cd926376ede2893db02fe6ecaf801d78a4"}
Jan 31 04:10:01 crc kubenswrapper[4667]: I0131 04:10:01.756539 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Jan 31 04:10:01 crc kubenswrapper[4667]: I0131 04:10:01.844890 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86c748c4d6-2grmh" podUID="c6974567-3bea-447a-bb8b-ced22b6d34ce" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused"
Jan 31 04:10:02 crc kubenswrapper[4667]: I0131 04:10:02.494202 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"841e82c7-29d0-414e-a01d-05718a83749b","Type":"ContainerStarted","Data":"8fa6fc80d77c52fc560e16740d95d79eaf36de9dbc139a40ea63c28fe37dc7f7"}
Jan 31 04:10:02 crc kubenswrapper[4667]: I0131 04:10:02.496453 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
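The two prober.go failures above record an HTTPS GET to port 8443 at /dashboard/auth/login/?next=/dashboard/ being refused while horizon is still starting. A startup probe of roughly this shape would issue exactly that request; this is a hypothetical reconstruction from the logged URL using the k8s.io/api types, not the manifest actually deployed (which is not part of this log):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        // Hypothetical horizon startup probe implied by the "Probe failed"
        // output above: HTTPS GET to :8443/dashboard/auth/login/?next=/dashboard/.
        startup := &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path:   "/dashboard/auth/login/?next=/dashboard/",
                    Port:   intstr.FromInt(8443),
                    Scheme: corev1.URISchemeHTTPS,
                },
            },
            FailureThreshold: 30, // assumed; the log shows repeated failures, not the threshold
            PeriodSeconds:    10, // assumed
        }
        fmt.Printf("%s %s\n", startup.ProbeHandler.HTTPGet.Scheme, startup.ProbeHandler.HTTPGet.Path)
    }

While such a probe keeps failing, the kubelet holds readiness back, which matches the earlier SyncLoop (probe) lines for these pods: probe="startup" status="unhealthy" alongside an empty readiness status.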
Jan 31 04:10:02 crc kubenswrapper[4667]: I0131 04:10:02.531104 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.531066826 podStartE2EDuration="2.531066826s" podCreationTimestamp="2026-01-31 04:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:10:02.523044813 +0000 UTC m=+1326.039380112" watchObservedRunningTime="2026-01-31 04:10:02.531066826 +0000 UTC m=+1326.047402135"
Jan 31 04:10:10 crc kubenswrapper[4667]: I0131 04:10:10.563493 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9b27517d-03d7-421b-9875-86ed13c59563" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 31 04:10:10 crc kubenswrapper[4667]: I0131 04:10:10.955566 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.566114 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-ggfpz"]
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.567728 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ggfpz"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.574471 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.575021 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.575326 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87224b26-43eb-4712-bef1-050a0653fb28-scripts\") pod \"nova-cell0-cell-mapping-ggfpz\" (UID: \"87224b26-43eb-4712-bef1-050a0653fb28\") " pod="openstack/nova-cell0-cell-mapping-ggfpz"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.575431 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87224b26-43eb-4712-bef1-050a0653fb28-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ggfpz\" (UID: \"87224b26-43eb-4712-bef1-050a0653fb28\") " pod="openstack/nova-cell0-cell-mapping-ggfpz"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.575473 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87224b26-43eb-4712-bef1-050a0653fb28-config-data\") pod \"nova-cell0-cell-mapping-ggfpz\" (UID: \"87224b26-43eb-4712-bef1-050a0653fb28\") " pod="openstack/nova-cell0-cell-mapping-ggfpz"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.575706 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpp7s\" (UniqueName: \"kubernetes.io/projected/87224b26-43eb-4712-bef1-050a0653fb28-kube-api-access-xpp7s\") pod \"nova-cell0-cell-mapping-ggfpz\" (UID: \"87224b26-43eb-4712-bef1-050a0653fb28\") " pod="openstack/nova-cell0-cell-mapping-ggfpz"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.596182 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ggfpz"]
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.677928 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87224b26-43eb-4712-bef1-050a0653fb28-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ggfpz\" (UID: \"87224b26-43eb-4712-bef1-050a0653fb28\") " pod="openstack/nova-cell0-cell-mapping-ggfpz"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.678243 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87224b26-43eb-4712-bef1-050a0653fb28-config-data\") pod \"nova-cell0-cell-mapping-ggfpz\" (UID: \"87224b26-43eb-4712-bef1-050a0653fb28\") " pod="openstack/nova-cell0-cell-mapping-ggfpz"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.678446 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpp7s\" (UniqueName: \"kubernetes.io/projected/87224b26-43eb-4712-bef1-050a0653fb28-kube-api-access-xpp7s\") pod \"nova-cell0-cell-mapping-ggfpz\" (UID: \"87224b26-43eb-4712-bef1-050a0653fb28\") " pod="openstack/nova-cell0-cell-mapping-ggfpz"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.678543 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87224b26-43eb-4712-bef1-050a0653fb28-scripts\") pod \"nova-cell0-cell-mapping-ggfpz\" (UID: \"87224b26-43eb-4712-bef1-050a0653fb28\") " pod="openstack/nova-cell0-cell-mapping-ggfpz"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.685996 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87224b26-43eb-4712-bef1-050a0653fb28-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ggfpz\" (UID: \"87224b26-43eb-4712-bef1-050a0653fb28\") " pod="openstack/nova-cell0-cell-mapping-ggfpz"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.691648 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87224b26-43eb-4712-bef1-050a0653fb28-scripts\") pod \"nova-cell0-cell-mapping-ggfpz\" (UID: \"87224b26-43eb-4712-bef1-050a0653fb28\") " pod="openstack/nova-cell0-cell-mapping-ggfpz"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.705971 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87224b26-43eb-4712-bef1-050a0653fb28-config-data\") pod \"nova-cell0-cell-mapping-ggfpz\" (UID: \"87224b26-43eb-4712-bef1-050a0653fb28\") " pod="openstack/nova-cell0-cell-mapping-ggfpz"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.712651 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpp7s\" (UniqueName: \"kubernetes.io/projected/87224b26-43eb-4712-bef1-050a0653fb28-kube-api-access-xpp7s\") pod \"nova-cell0-cell-mapping-ggfpz\" (UID: \"87224b26-43eb-4712-bef1-050a0653fb28\") " pod="openstack/nova-cell0-cell-mapping-ggfpz"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.798498 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.800406 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.802557 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.814936 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.822438 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.844193 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.871570 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.886696 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66b568e7-a37b-4eae-b00c-4000f8a51517-config-data\") pod \"nova-metadata-0\" (UID: \"66b568e7-a37b-4eae-b00c-4000f8a51517\") " pod="openstack/nova-metadata-0"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.886983 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66b568e7-a37b-4eae-b00c-4000f8a51517-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"66b568e7-a37b-4eae-b00c-4000f8a51517\") " pod="openstack/nova-metadata-0"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.887034 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66b568e7-a37b-4eae-b00c-4000f8a51517-logs\") pod \"nova-metadata-0\" (UID: \"66b568e7-a37b-4eae-b00c-4000f8a51517\") " pod="openstack/nova-metadata-0"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.887083 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rbrp\" (UniqueName: \"kubernetes.io/projected/66b568e7-a37b-4eae-b00c-4000f8a51517-kube-api-access-6rbrp\") pod \"nova-metadata-0\" (UID: \"66b568e7-a37b-4eae-b00c-4000f8a51517\") " pod="openstack/nova-metadata-0"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.921580 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ggfpz"
Jan 31 04:10:11 crc kubenswrapper[4667]: I0131 04:10:11.929106 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.029305 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b9d2c1-5919-4939-8523-445092cad2a8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e1b9d2c1-5919-4939-8523-445092cad2a8\") " pod="openstack/nova-api-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.029678 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1b9d2c1-5919-4939-8523-445092cad2a8-logs\") pod \"nova-api-0\" (UID: \"e1b9d2c1-5919-4939-8523-445092cad2a8\") " pod="openstack/nova-api-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.029855 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66b568e7-a37b-4eae-b00c-4000f8a51517-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"66b568e7-a37b-4eae-b00c-4000f8a51517\") " pod="openstack/nova-metadata-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.029940 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66b568e7-a37b-4eae-b00c-4000f8a51517-logs\") pod \"nova-metadata-0\" (UID: \"66b568e7-a37b-4eae-b00c-4000f8a51517\") " pod="openstack/nova-metadata-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.030020 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rbrp\" (UniqueName: \"kubernetes.io/projected/66b568e7-a37b-4eae-b00c-4000f8a51517-kube-api-access-6rbrp\") pod \"nova-metadata-0\" (UID: \"66b568e7-a37b-4eae-b00c-4000f8a51517\") " pod="openstack/nova-metadata-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.030069 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc2pm\" (UniqueName: \"kubernetes.io/projected/e1b9d2c1-5919-4939-8523-445092cad2a8-kube-api-access-lc2pm\") pod \"nova-api-0\" (UID: \"e1b9d2c1-5919-4939-8523-445092cad2a8\") " pod="openstack/nova-api-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.030173 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66b568e7-a37b-4eae-b00c-4000f8a51517-config-data\") pod \"nova-metadata-0\" (UID: \"66b568e7-a37b-4eae-b00c-4000f8a51517\") " pod="openstack/nova-metadata-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.030413 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b9d2c1-5919-4939-8523-445092cad2a8-config-data\") pod \"nova-api-0\" (UID: \"e1b9d2c1-5919-4939-8523-445092cad2a8\") " pod="openstack/nova-api-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.031233 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66b568e7-a37b-4eae-b00c-4000f8a51517-logs\") pod \"nova-metadata-0\" (UID: \"66b568e7-a37b-4eae-b00c-4000f8a51517\") " pod="openstack/nova-metadata-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.046038 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66b568e7-a37b-4eae-b00c-4000f8a51517-config-data\") pod \"nova-metadata-0\" (UID: \"66b568e7-a37b-4eae-b00c-4000f8a51517\") " pod="openstack/nova-metadata-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.049409 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66b568e7-a37b-4eae-b00c-4000f8a51517-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"66b568e7-a37b-4eae-b00c-4000f8a51517\") " pod="openstack/nova-metadata-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.079534 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rbrp\" (UniqueName: \"kubernetes.io/projected/66b568e7-a37b-4eae-b00c-4000f8a51517-kube-api-access-6rbrp\") pod \"nova-metadata-0\" (UID: \"66b568e7-a37b-4eae-b00c-4000f8a51517\") " pod="openstack/nova-metadata-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.118384 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-5mh27"]
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.119953 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.131243 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1b9d2c1-5919-4939-8523-445092cad2a8-logs\") pod \"nova-api-0\" (UID: \"e1b9d2c1-5919-4939-8523-445092cad2a8\") " pod="openstack/nova-api-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.131310 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc2pm\" (UniqueName: \"kubernetes.io/projected/e1b9d2c1-5919-4939-8523-445092cad2a8-kube-api-access-lc2pm\") pod \"nova-api-0\" (UID: \"e1b9d2c1-5919-4939-8523-445092cad2a8\") " pod="openstack/nova-api-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.131338 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-5mh27\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.131363 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-dns-svc\") pod \"dnsmasq-dns-757b4f8459-5mh27\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.131389 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-config\") pod \"dnsmasq-dns-757b4f8459-5mh27\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.131419 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-5mh27\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.131455 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b9d2c1-5919-4939-8523-445092cad2a8-config-data\") pod \"nova-api-0\" (UID: \"e1b9d2c1-5919-4939-8523-445092cad2a8\") " pod="openstack/nova-api-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.131481 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-5mh27\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.131548 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp552\" (UniqueName: \"kubernetes.io/projected/06e48307-baba-472e-b0e1-81fa37a6cd22-kube-api-access-jp552\") pod \"dnsmasq-dns-757b4f8459-5mh27\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.131606 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b9d2c1-5919-4939-8523-445092cad2a8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e1b9d2c1-5919-4939-8523-445092cad2a8\") " pod="openstack/nova-api-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.138073 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1b9d2c1-5919-4939-8523-445092cad2a8-logs\") pod \"nova-api-0\" (UID: \"e1b9d2c1-5919-4939-8523-445092cad2a8\") " pod="openstack/nova-api-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.139505 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-5mh27"]
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.145228 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b9d2c1-5919-4939-8523-445092cad2a8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e1b9d2c1-5919-4939-8523-445092cad2a8\") " pod="openstack/nova-api-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.155788 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b9d2c1-5919-4939-8523-445092cad2a8-config-data\") pod \"nova-api-0\" (UID: \"e1b9d2c1-5919-4939-8523-445092cad2a8\") " pod="openstack/nova-api-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.166326 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.193623 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc2pm\" (UniqueName: \"kubernetes.io/projected/e1b9d2c1-5919-4939-8523-445092cad2a8-kube-api-access-lc2pm\") pod \"nova-api-0\" (UID: \"e1b9d2c1-5919-4939-8523-445092cad2a8\") " pod="openstack/nova-api-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.205754 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.207213 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.233927 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.255773 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-config\") pod \"dnsmasq-dns-757b4f8459-5mh27\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.265818 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-5mh27\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.266003 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-5mh27\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.266079 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp552\" (UniqueName: \"kubernetes.io/projected/06e48307-baba-472e-b0e1-81fa37a6cd22-kube-api-access-jp552\") pod \"dnsmasq-dns-757b4f8459-5mh27\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.266472 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-5mh27\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.266509 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-dns-svc\") pod \"dnsmasq-dns-757b4f8459-5mh27\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.267488 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-dns-svc\") pod \"dnsmasq-dns-757b4f8459-5mh27\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.278366 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-5mh27\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.328736 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-5mh27\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.332635 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-config\") pod \"dnsmasq-dns-757b4f8459-5mh27\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.347627 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-5mh27\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.430618 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e6271b-34bd-43ad-9505-c4eff960694a-config-data\") pod \"nova-scheduler-0\" (UID: \"f8e6271b-34bd-43ad-9505-c4eff960694a\") " pod="openstack/nova-scheduler-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.430924 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b2bv\" (UniqueName: \"kubernetes.io/projected/f8e6271b-34bd-43ad-9505-c4eff960694a-kube-api-access-7b2bv\") pod \"nova-scheduler-0\" (UID: \"f8e6271b-34bd-43ad-9505-c4eff960694a\") " pod="openstack/nova-scheduler-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.430974 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e6271b-34bd-43ad-9505-c4eff960694a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f8e6271b-34bd-43ad-9505-c4eff960694a\") " pod="openstack/nova-scheduler-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.445559 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.446756 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp552\" (UniqueName: \"kubernetes.io/projected/06e48307-baba-472e-b0e1-81fa37a6cd22-kube-api-access-jp552\") pod \"dnsmasq-dns-757b4f8459-5mh27\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.471567 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.485369 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.548808 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e6271b-34bd-43ad-9505-c4eff960694a-config-data\") pod \"nova-scheduler-0\" (UID: \"f8e6271b-34bd-43ad-9505-c4eff960694a\") " pod="openstack/nova-scheduler-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.548983 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b2bv\" (UniqueName: \"kubernetes.io/projected/f8e6271b-34bd-43ad-9505-c4eff960694a-kube-api-access-7b2bv\") pod \"nova-scheduler-0\" (UID: \"f8e6271b-34bd-43ad-9505-c4eff960694a\") " pod="openstack/nova-scheduler-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.549019 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e6271b-34bd-43ad-9505-c4eff960694a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f8e6271b-34bd-43ad-9505-c4eff960694a\") " pod="openstack/nova-scheduler-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.561191 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e6271b-34bd-43ad-9505-c4eff960694a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f8e6271b-34bd-43ad-9505-c4eff960694a\") " pod="openstack/nova-scheduler-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.565085 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e6271b-34bd-43ad-9505-c4eff960694a-config-data\") pod \"nova-scheduler-0\" (UID: \"f8e6271b-34bd-43ad-9505-c4eff960694a\") " pod="openstack/nova-scheduler-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.573763 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.576224 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.584168 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b2bv\" (UniqueName: \"kubernetes.io/projected/f8e6271b-34bd-43ad-9505-c4eff960694a-kube-api-access-7b2bv\") pod \"nova-scheduler-0\" (UID: \"f8e6271b-34bd-43ad-9505-c4eff960694a\") " pod="openstack/nova-scheduler-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.593350 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.626428 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.651287 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2ce69c-d98e-40a4-b6e9-be00a97ca5af-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b2ce69c-d98e-40a4-b6e9-be00a97ca5af\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.651523 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2ce69c-d98e-40a4-b6e9-be00a97ca5af-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b2ce69c-d98e-40a4-b6e9-be00a97ca5af\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.651554 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpzf6\" (UniqueName: \"kubernetes.io/projected/3b2ce69c-d98e-40a4-b6e9-be00a97ca5af-kube-api-access-rpzf6\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b2ce69c-d98e-40a4-b6e9-be00a97ca5af\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.753039 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2ce69c-d98e-40a4-b6e9-be00a97ca5af-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b2ce69c-d98e-40a4-b6e9-be00a97ca5af\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.753164 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2ce69c-d98e-40a4-b6e9-be00a97ca5af-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b2ce69c-d98e-40a4-b6e9-be00a97ca5af\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.753192 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpzf6\" (UniqueName: \"kubernetes.io/projected/3b2ce69c-d98e-40a4-b6e9-be00a97ca5af-kube-api-access-rpzf6\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b2ce69c-d98e-40a4-b6e9-be00a97ca5af\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.761009 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2ce69c-d98e-40a4-b6e9-be00a97ca5af-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b2ce69c-d98e-40a4-b6e9-be00a97ca5af\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.779184 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2ce69c-d98e-40a4-b6e9-be00a97ca5af-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b2ce69c-d98e-40a4-b6e9-be00a97ca5af\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.796686 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpzf6\" (UniqueName: \"kubernetes.io/projected/3b2ce69c-d98e-40a4-b6e9-be00a97ca5af-kube-api-access-rpzf6\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b2ce69c-d98e-40a4-b6e9-be00a97ca5af\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.862551 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.962757 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 31 04:10:12 crc kubenswrapper[4667]: I0131 04:10:12.965640 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 04:10:13 crc kubenswrapper[4667]: I0131 04:10:13.040339 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ggfpz"]
Jan 31 04:10:13 crc kubenswrapper[4667]: I0131 04:10:13.473748 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-5mh27"]
Jan 31 04:10:13 crc kubenswrapper[4667]: I0131 04:10:13.505415 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 31 04:10:13 crc kubenswrapper[4667]: I0131 04:10:13.691075 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"66b568e7-a37b-4eae-b00c-4000f8a51517","Type":"ContainerStarted","Data":"a46635cc1f871707ae61fa83c7a3c708af90a691ed1c6539a1a864cae10ed7e6"}
Jan 31 04:10:13 crc kubenswrapper[4667]: I0131 04:10:13.699822 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-5mh27" event={"ID":"06e48307-baba-472e-b0e1-81fa37a6cd22","Type":"ContainerStarted","Data":"904d20444037e8f5571b43373602986fec06dcd925c77290704fc0b9a9212878"}
Jan 31 04:10:13 crc kubenswrapper[4667]: I0131 04:10:13.703083 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e1b9d2c1-5919-4939-8523-445092cad2a8","Type":"ContainerStarted","Data":"b7723d1da6c495d13de0bad6ad3de664dea77b7768b28d649e649a77614a3491"}
Jan 31 04:10:13 crc kubenswrapper[4667]: I0131 04:10:13.706363 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ggfpz" event={"ID":"87224b26-43eb-4712-bef1-050a0653fb28","Type":"ContainerStarted","Data":"1b424600dcd6e8c5d0b6af4339dd1cf081319bc24abd475fce83207664fd5fec"}
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.070154 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.130580 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.430925 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-w94fr"]
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.433198 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-w94fr"
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.456088 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.464417 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.465871 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-w94fr"]
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.547368 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed98e28a-5baf-4f7a-aafb-b03916785619-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-w94fr\" (UID: \"ed98e28a-5baf-4f7a-aafb-b03916785619\") " pod="openstack/nova-cell1-conductor-db-sync-w94fr"
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.547636 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed98e28a-5baf-4f7a-aafb-b03916785619-config-data\") pod \"nova-cell1-conductor-db-sync-w94fr\" (UID: \"ed98e28a-5baf-4f7a-aafb-b03916785619\") " pod="openstack/nova-cell1-conductor-db-sync-w94fr"
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.547889 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw9c8\" (UniqueName: \"kubernetes.io/projected/ed98e28a-5baf-4f7a-aafb-b03916785619-kube-api-access-zw9c8\") pod \"nova-cell1-conductor-db-sync-w94fr\" (UID: \"ed98e28a-5baf-4f7a-aafb-b03916785619\") " pod="openstack/nova-cell1-conductor-db-sync-w94fr"
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.548169 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed98e28a-5baf-4f7a-aafb-b03916785619-scripts\") pod \"nova-cell1-conductor-db-sync-w94fr\" (UID: \"ed98e28a-5baf-4f7a-aafb-b03916785619\") " pod="openstack/nova-cell1-conductor-db-sync-w94fr"
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.650308 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed98e28a-5baf-4f7a-aafb-b03916785619-scripts\") pod \"nova-cell1-conductor-db-sync-w94fr\" (UID: \"ed98e28a-5baf-4f7a-aafb-b03916785619\") " pod="openstack/nova-cell1-conductor-db-sync-w94fr"
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.650358 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed98e28a-5baf-4f7a-aafb-b03916785619-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-w94fr\" (UID: \"ed98e28a-5baf-4f7a-aafb-b03916785619\") " pod="openstack/nova-cell1-conductor-db-sync-w94fr"
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.651534 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed98e28a-5baf-4f7a-aafb-b03916785619-config-data\") pod \"nova-cell1-conductor-db-sync-w94fr\" (UID: \"ed98e28a-5baf-4f7a-aafb-b03916785619\") " pod="openstack/nova-cell1-conductor-db-sync-w94fr"
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.651638 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw9c8\" (UniqueName: \"kubernetes.io/projected/ed98e28a-5baf-4f7a-aafb-b03916785619-kube-api-access-zw9c8\") pod \"nova-cell1-conductor-db-sync-w94fr\" (UID: \"ed98e28a-5baf-4f7a-aafb-b03916785619\") " pod="openstack/nova-cell1-conductor-db-sync-w94fr"
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.660026 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed98e28a-5baf-4f7a-aafb-b03916785619-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-w94fr\" (UID: \"ed98e28a-5baf-4f7a-aafb-b03916785619\") " pod="openstack/nova-cell1-conductor-db-sync-w94fr"
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.660707 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed98e28a-5baf-4f7a-aafb-b03916785619-scripts\") pod \"nova-cell1-conductor-db-sync-w94fr\" (UID: \"ed98e28a-5baf-4f7a-aafb-b03916785619\") " pod="openstack/nova-cell1-conductor-db-sync-w94fr"
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.662992 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed98e28a-5baf-4f7a-aafb-b03916785619-config-data\") pod \"nova-cell1-conductor-db-sync-w94fr\" (UID: \"ed98e28a-5baf-4f7a-aafb-b03916785619\") " pod="openstack/nova-cell1-conductor-db-sync-w94fr"
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.671945 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw9c8\" (UniqueName: \"kubernetes.io/projected/ed98e28a-5baf-4f7a-aafb-b03916785619-kube-api-access-zw9c8\") pod \"nova-cell1-conductor-db-sync-w94fr\" (UID: \"ed98e28a-5baf-4f7a-aafb-b03916785619\") " pod="openstack/nova-cell1-conductor-db-sync-w94fr"
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.771207 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-w94fr"
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.786103 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3b2ce69c-d98e-40a4-b6e9-be00a97ca5af","Type":"ContainerStarted","Data":"22e5261a75fb26e7c42c5bcc924b67dfa5b2bd1eae95adce64df57abf689ff29"}
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.800848 4667 generic.go:334] "Generic (PLEG): container finished" podID="06e48307-baba-472e-b0e1-81fa37a6cd22" containerID="ad269841ab9261d1bb0e33c293c60208eafa98ba9c660c5c8130e7539581c581" exitCode=0
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.800964 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-5mh27" event={"ID":"06e48307-baba-472e-b0e1-81fa37a6cd22","Type":"ContainerDied","Data":"ad269841ab9261d1bb0e33c293c60208eafa98ba9c660c5c8130e7539581c581"}
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.815187 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f8e6271b-34bd-43ad-9505-c4eff960694a","Type":"ContainerStarted","Data":"6ab9dcaca696af56d6bf79b551c5b132adfd1bbf4b97c930adabd056026f0a31"}
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.851783 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ggfpz" event={"ID":"87224b26-43eb-4712-bef1-050a0653fb28","Type":"ContainerStarted","Data":"38f08cd67983bd4889429f7af6614e1b16f81cb85fa950741834b9eed02dbd53"}
Jan 31 04:10:14 crc kubenswrapper[4667]: I0131 04:10:14.924171 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-ggfpz" podStartSLOduration=3.924142734 podStartE2EDuration="3.924142734s" podCreationTimestamp="2026-01-31 04:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:10:14.878322477 +0000 UTC m=+1338.394657766" watchObservedRunningTime="2026-01-31 04:10:14.924142734 +0000 UTC m=+1338.440478033"
Jan 31 04:10:15 crc kubenswrapper[4667]: I0131 04:10:15.640957 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-w94fr"]
Jan 31 04:10:15 crc kubenswrapper[4667]: I0131 04:10:15.874957 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-5mh27" event={"ID":"06e48307-baba-472e-b0e1-81fa37a6cd22","Type":"ContainerStarted","Data":"0174cf68decba0f0779df25a81d026bbfeaa050afe5528356881d0b95c728eec"}
Jan 31 04:10:15 crc kubenswrapper[4667]: I0131 04:10:15.876825 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-5mh27"
Jan 31 04:10:15 crc kubenswrapper[4667]: I0131 04:10:15.886504 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-w94fr" event={"ID":"ed98e28a-5baf-4f7a-aafb-b03916785619","Type":"ContainerStarted","Data":"ea20f5b3f3a8de95ec11eef00cbf83d7de8cf6b9486e0c2a50b06ff7d9dd5e1f"}
Jan 31 04:10:15 crc kubenswrapper[4667]: I0131 04:10:15.913877 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-5mh27" podStartSLOduration=3.913854755 podStartE2EDuration="3.913854755s" podCreationTimestamp="2026-01-31 04:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:10:15.902458302 +0000 UTC m=+1339.418793601" watchObservedRunningTime="2026-01-31 04:10:15.913854755 +0000 UTC m=+1339.430190054"
Jan 31 04:10:16 crc kubenswrapper[4667]: I0131 04:10:16.309583 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 31 04:10:16 crc kubenswrapper[4667]: I0131 04:10:16.318467 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 04:10:16 crc kubenswrapper[4667]: I0131 04:10:16.760302 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 31 04:10:16 crc kubenswrapper[4667]: I0131 04:10:16.849173 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86c748c4d6-2grmh" podUID="c6974567-3bea-447a-bb8b-ced22b6d34ce" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 31 04:10:16 crc kubenswrapper[4667]: I0131 04:10:16.901371 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-w94fr" event={"ID":"ed98e28a-5baf-4f7a-aafb-b03916785619","Type":"ContainerStarted","Data":"56a297688de57dc27497a4f04bd410ffa1654c2e35b421ce26c69749805b19b3"}
Jan 31 04:10:16 crc kubenswrapper[4667]: I0131 04:10:16.927125 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-w94fr" podStartSLOduration=2.9271020119999998 podStartE2EDuration="2.927102012s" podCreationTimestamp="2026-01-31 04:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:10:16.918631017 +0000 UTC m=+1340.434966326" watchObservedRunningTime="2026-01-31 04:10:16.927102012 +0000 UTC m=+1340.443437321"
Jan 31 04:10:17 crc kubenswrapper[4667]: W0131 04:10:17.407389 4667 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06e48307_baba_472e_b0e1_81fa37a6cd22.slice/crio-conmon-ad269841ab9261d1bb0e33c293c60208eafa98ba9c660c5c8130e7539581c581.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06e48307_baba_472e_b0e1_81fa37a6cd22.slice/crio-conmon-ad269841ab9261d1bb0e33c293c60208eafa98ba9c660c5c8130e7539581c581.scope: no such file or directory
Jan 31 04:10:17 crc kubenswrapper[4667]: W0131 04:10:17.407478 4667 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06e48307_baba_472e_b0e1_81fa37a6cd22.slice/crio-ad269841ab9261d1bb0e33c293c60208eafa98ba9c660c5c8130e7539581c581.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06e48307_baba_472e_b0e1_81fa37a6cd22.slice/crio-ad269841ab9261d1bb0e33c293c60208eafa98ba9c660c5c8130e7539581c581.scope: no such file or directory
err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b27517d_03d7_421b_9875_86ed13c59563.slice/crio-9dc32b1f9a21a019e1557d2ace63e4c2497b90006d960fca870dc76694dce474.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b27517d_03d7_421b_9875_86ed13c59563.slice/crio-conmon-9dc32b1f9a21a019e1557d2ace63e4c2497b90006d960fca870dc76694dce474.scope\": RecentStats: unable to find data in memory cache]" Jan 31 04:10:17 crc kubenswrapper[4667]: I0131 04:10:17.919533 4667 generic.go:334] "Generic (PLEG): container finished" podID="9b27517d-03d7-421b-9875-86ed13c59563" containerID="9dc32b1f9a21a019e1557d2ace63e4c2497b90006d960fca870dc76694dce474" exitCode=137 Jan 31 04:10:17 crc kubenswrapper[4667]: I0131 04:10:17.919708 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b27517d-03d7-421b-9875-86ed13c59563","Type":"ContainerDied","Data":"9dc32b1f9a21a019e1557d2ace63e4c2497b90006d960fca870dc76694dce474"} Jan 31 04:10:18 crc kubenswrapper[4667]: I0131 04:10:18.880179 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:10:18 crc kubenswrapper[4667]: I0131 04:10:18.944538 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbb2t\" (UniqueName: \"kubernetes.io/projected/9b27517d-03d7-421b-9875-86ed13c59563-kube-api-access-xbb2t\") pod \"9b27517d-03d7-421b-9875-86ed13c59563\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " Jan 31 04:10:18 crc kubenswrapper[4667]: I0131 04:10:18.944630 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-config-data\") pod \"9b27517d-03d7-421b-9875-86ed13c59563\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " Jan 31 04:10:18 crc kubenswrapper[4667]: I0131 04:10:18.945622 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-sg-core-conf-yaml\") pod \"9b27517d-03d7-421b-9875-86ed13c59563\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " Jan 31 04:10:18 crc kubenswrapper[4667]: I0131 04:10:18.945821 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-scripts\") pod \"9b27517d-03d7-421b-9875-86ed13c59563\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " Jan 31 04:10:18 crc kubenswrapper[4667]: I0131 04:10:18.945901 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b27517d-03d7-421b-9875-86ed13c59563-run-httpd\") pod \"9b27517d-03d7-421b-9875-86ed13c59563\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " Jan 31 04:10:18 crc kubenswrapper[4667]: I0131 04:10:18.946005 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-combined-ca-bundle\") pod \"9b27517d-03d7-421b-9875-86ed13c59563\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " Jan 31 04:10:18 crc kubenswrapper[4667]: I0131 04:10:18.946102 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9b27517d-03d7-421b-9875-86ed13c59563-log-httpd\") pod \"9b27517d-03d7-421b-9875-86ed13c59563\" (UID: \"9b27517d-03d7-421b-9875-86ed13c59563\") " Jan 31 04:10:18 crc kubenswrapper[4667]: I0131 04:10:18.946973 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b27517d-03d7-421b-9875-86ed13c59563-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9b27517d-03d7-421b-9875-86ed13c59563" (UID: "9b27517d-03d7-421b-9875-86ed13c59563"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:10:18 crc kubenswrapper[4667]: I0131 04:10:18.949474 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b27517d-03d7-421b-9875-86ed13c59563-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9b27517d-03d7-421b-9875-86ed13c59563" (UID: "9b27517d-03d7-421b-9875-86ed13c59563"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:10:18 crc kubenswrapper[4667]: I0131 04:10:18.977043 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-scripts" (OuterVolumeSpecName: "scripts") pod "9b27517d-03d7-421b-9875-86ed13c59563" (UID: "9b27517d-03d7-421b-9875-86ed13c59563"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:18 crc kubenswrapper[4667]: I0131 04:10:18.977543 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b27517d-03d7-421b-9875-86ed13c59563","Type":"ContainerDied","Data":"9820834efee43697b35ca81bd415e8e3cbeb4c7f1c29015d289143041b86c20c"} Jan 31 04:10:18 crc kubenswrapper[4667]: I0131 04:10:18.977724 4667 scope.go:117] "RemoveContainer" containerID="9dc32b1f9a21a019e1557d2ace63e4c2497b90006d960fca870dc76694dce474" Jan 31 04:10:18 crc kubenswrapper[4667]: I0131 04:10:18.977572 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:10:18 crc kubenswrapper[4667]: I0131 04:10:18.987084 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b27517d-03d7-421b-9875-86ed13c59563-kube-api-access-xbb2t" (OuterVolumeSpecName: "kube-api-access-xbb2t") pod "9b27517d-03d7-421b-9875-86ed13c59563" (UID: "9b27517d-03d7-421b-9875-86ed13c59563"). InnerVolumeSpecName "kube-api-access-xbb2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.007542 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="3b2ce69c-d98e-40a4-b6e9-be00a97ca5af" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://672a55b391cc5adf1f60292ca80ce2df2d1792ffb35906a85d4ff27725aac6b0" gracePeriod=30 Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.017408 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.580878201 podStartE2EDuration="7.01738394s" podCreationTimestamp="2026-01-31 04:10:12 +0000 UTC" firstStartedPulling="2026-01-31 04:10:14.093221106 +0000 UTC m=+1337.609556405" lastFinishedPulling="2026-01-31 04:10:18.529726825 +0000 UTC m=+1342.046062144" observedRunningTime="2026-01-31 04:10:19.007849267 +0000 UTC m=+1342.524184586" watchObservedRunningTime="2026-01-31 04:10:19.01738394 +0000 UTC m=+1342.533719239" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.020784 4667 scope.go:117] "RemoveContainer" containerID="1c2a6f7b79b61f75b9a16a2c9852fab43ecf31936bbecda700bc44263f232029" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.034229 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9b27517d-03d7-421b-9875-86ed13c59563" (UID: "9b27517d-03d7-421b-9875-86ed13c59563"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.045143 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.680801464 podStartE2EDuration="7.045118686s" podCreationTimestamp="2026-01-31 04:10:12 +0000 UTC" firstStartedPulling="2026-01-31 04:10:14.16304477 +0000 UTC m=+1337.679380069" lastFinishedPulling="2026-01-31 04:10:18.527361982 +0000 UTC m=+1342.043697291" observedRunningTime="2026-01-31 04:10:19.027043257 +0000 UTC m=+1342.543378556" watchObservedRunningTime="2026-01-31 04:10:19.045118686 +0000 UTC m=+1342.561453985" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.050564 4667 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b27517d-03d7-421b-9875-86ed13c59563-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.050696 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbb2t\" (UniqueName: \"kubernetes.io/projected/9b27517d-03d7-421b-9875-86ed13c59563-kube-api-access-xbb2t\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.050775 4667 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.050833 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.050920 4667 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9b27517d-03d7-421b-9875-86ed13c59563-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.061196 4667 scope.go:117] "RemoveContainer" containerID="779f06c291a842c0ffd5794149f2483bdf9acd0d6ba5bcbeba838d6621234e66" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.108603 4667 scope.go:117] "RemoveContainer" containerID="5359333fcbac8539b304a6ff51ce6afee12f9e53301cbf046f93c10be46decf5" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.122084 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b27517d-03d7-421b-9875-86ed13c59563" (UID: "9b27517d-03d7-421b-9875-86ed13c59563"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.131735 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-config-data" (OuterVolumeSpecName: "config-data") pod "9b27517d-03d7-421b-9875-86ed13c59563" (UID: "9b27517d-03d7-421b-9875-86ed13c59563"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.153368 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.153414 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b27517d-03d7-421b-9875-86ed13c59563-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.328052 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.339757 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.390086 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:10:19 crc kubenswrapper[4667]: E0131 04:10:19.390638 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b27517d-03d7-421b-9875-86ed13c59563" containerName="ceilometer-central-agent" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.390659 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b27517d-03d7-421b-9875-86ed13c59563" containerName="ceilometer-central-agent" Jan 31 04:10:19 crc kubenswrapper[4667]: E0131 04:10:19.390678 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b27517d-03d7-421b-9875-86ed13c59563" containerName="ceilometer-notification-agent" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.390687 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b27517d-03d7-421b-9875-86ed13c59563" containerName="ceilometer-notification-agent" Jan 31 04:10:19 crc kubenswrapper[4667]: E0131 04:10:19.390709 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b27517d-03d7-421b-9875-86ed13c59563" containerName="sg-core" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.390716 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b27517d-03d7-421b-9875-86ed13c59563" containerName="sg-core" Jan 31 04:10:19 
crc kubenswrapper[4667]: E0131 04:10:19.390727 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b27517d-03d7-421b-9875-86ed13c59563" containerName="proxy-httpd" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.390733 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b27517d-03d7-421b-9875-86ed13c59563" containerName="proxy-httpd" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.390928 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b27517d-03d7-421b-9875-86ed13c59563" containerName="ceilometer-central-agent" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.390947 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b27517d-03d7-421b-9875-86ed13c59563" containerName="proxy-httpd" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.390961 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b27517d-03d7-421b-9875-86ed13c59563" containerName="ceilometer-notification-agent" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.390969 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b27517d-03d7-421b-9875-86ed13c59563" containerName="sg-core" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.392880 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.395293 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.395574 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.395930 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.404205 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.472802 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.474178 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgmns\" (UniqueName: \"kubernetes.io/projected/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-kube-api-access-hgmns\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.474278 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-run-httpd\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.474402 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 
crc kubenswrapper[4667]: I0131 04:10:19.474479 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-scripts\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.474549 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-config-data\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.474623 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-log-httpd\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.474728 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.577088 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-run-httpd\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.577173 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.577202 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-scripts\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.577222 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-config-data\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.577248 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-log-httpd\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.577292 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " 
pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.577335 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.577395 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgmns\" (UniqueName: \"kubernetes.io/projected/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-kube-api-access-hgmns\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.578509 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-run-httpd\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.578854 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-log-httpd\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.587421 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.587532 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.588757 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-config-data\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.589433 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.602496 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-scripts\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 crc kubenswrapper[4667]: I0131 04:10:19.612450 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgmns\" (UniqueName: \"kubernetes.io/projected/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-kube-api-access-hgmns\") pod \"ceilometer-0\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " pod="openstack/ceilometer-0" Jan 31 04:10:19 
crc kubenswrapper[4667]: I0131 04:10:19.728177 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:10:20 crc kubenswrapper[4667]: I0131 04:10:20.081040 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"66b568e7-a37b-4eae-b00c-4000f8a51517","Type":"ContainerStarted","Data":"7b395f386184379d45251c480cbbce06f7e24c1b4cdcacd464a232a752b3f9e1"} Jan 31 04:10:20 crc kubenswrapper[4667]: I0131 04:10:20.081421 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"66b568e7-a37b-4eae-b00c-4000f8a51517","Type":"ContainerStarted","Data":"6f27fb0d1e05e88ab0aa6caa957843ed3b2d86454219ff4c17528cd2337f0247"} Jan 31 04:10:20 crc kubenswrapper[4667]: I0131 04:10:20.081201 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="66b568e7-a37b-4eae-b00c-4000f8a51517" containerName="nova-metadata-log" containerID="cri-o://7b395f386184379d45251c480cbbce06f7e24c1b4cdcacd464a232a752b3f9e1" gracePeriod=30 Jan 31 04:10:20 crc kubenswrapper[4667]: I0131 04:10:20.081548 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="66b568e7-a37b-4eae-b00c-4000f8a51517" containerName="nova-metadata-metadata" containerID="cri-o://6f27fb0d1e05e88ab0aa6caa957843ed3b2d86454219ff4c17528cd2337f0247" gracePeriod=30 Jan 31 04:10:20 crc kubenswrapper[4667]: I0131 04:10:20.112616 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3b2ce69c-d98e-40a4-b6e9-be00a97ca5af","Type":"ContainerStarted","Data":"672a55b391cc5adf1f60292ca80ce2df2d1792ffb35906a85d4ff27725aac6b0"} Jan 31 04:10:20 crc kubenswrapper[4667]: I0131 04:10:20.124832 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e1b9d2c1-5919-4939-8523-445092cad2a8","Type":"ContainerStarted","Data":"906a92cab5d3cc7dd7786eb0130aeb1d2854f0cc3769c0e8172ead36ae122878"} Jan 31 04:10:20 crc kubenswrapper[4667]: I0131 04:10:20.124904 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e1b9d2c1-5919-4939-8523-445092cad2a8","Type":"ContainerStarted","Data":"8914066a25b5cde067722b64fd0e5e1853b49463dbc35ca70c4a15114d8cc9c3"} Jan 31 04:10:20 crc kubenswrapper[4667]: I0131 04:10:20.181427 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.791342427 podStartE2EDuration="9.181301486s" podCreationTimestamp="2026-01-31 04:10:11 +0000 UTC" firstStartedPulling="2026-01-31 04:10:13.132088732 +0000 UTC m=+1336.648424031" lastFinishedPulling="2026-01-31 04:10:18.522047771 +0000 UTC m=+1342.038383090" observedRunningTime="2026-01-31 04:10:20.145389963 +0000 UTC m=+1343.661725262" watchObservedRunningTime="2026-01-31 04:10:20.181301486 +0000 UTC m=+1343.697636785" Jan 31 04:10:20 crc kubenswrapper[4667]: I0131 04:10:20.192361 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f8e6271b-34bd-43ad-9505-c4eff960694a","Type":"ContainerStarted","Data":"2e31892dded3b6d96b9608da2963ad1f77da32c3b2e8093c0559ce612c5d6a7c"} Jan 31 04:10:20 crc kubenswrapper[4667]: I0131 04:10:20.200823 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.2202058319999995 podStartE2EDuration="9.200791083s" podCreationTimestamp="2026-01-31 
04:10:11 +0000 UTC" firstStartedPulling="2026-01-31 04:10:13.541698526 +0000 UTC m=+1337.058033825" lastFinishedPulling="2026-01-31 04:10:18.522283767 +0000 UTC m=+1342.038619076" observedRunningTime="2026-01-31 04:10:20.190757567 +0000 UTC m=+1343.707092866" watchObservedRunningTime="2026-01-31 04:10:20.200791083 +0000 UTC m=+1343.717126382" Jan 31 04:10:20 crc kubenswrapper[4667]: I0131 04:10:20.360367 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.064343 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.152210 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66b568e7-a37b-4eae-b00c-4000f8a51517-config-data\") pod \"66b568e7-a37b-4eae-b00c-4000f8a51517\" (UID: \"66b568e7-a37b-4eae-b00c-4000f8a51517\") " Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.152317 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66b568e7-a37b-4eae-b00c-4000f8a51517-combined-ca-bundle\") pod \"66b568e7-a37b-4eae-b00c-4000f8a51517\" (UID: \"66b568e7-a37b-4eae-b00c-4000f8a51517\") " Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.152350 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66b568e7-a37b-4eae-b00c-4000f8a51517-logs\") pod \"66b568e7-a37b-4eae-b00c-4000f8a51517\" (UID: \"66b568e7-a37b-4eae-b00c-4000f8a51517\") " Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.152600 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rbrp\" (UniqueName: \"kubernetes.io/projected/66b568e7-a37b-4eae-b00c-4000f8a51517-kube-api-access-6rbrp\") pod \"66b568e7-a37b-4eae-b00c-4000f8a51517\" (UID: \"66b568e7-a37b-4eae-b00c-4000f8a51517\") " Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.153670 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66b568e7-a37b-4eae-b00c-4000f8a51517-logs" (OuterVolumeSpecName: "logs") pod "66b568e7-a37b-4eae-b00c-4000f8a51517" (UID: "66b568e7-a37b-4eae-b00c-4000f8a51517"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.164670 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b568e7-a37b-4eae-b00c-4000f8a51517-kube-api-access-6rbrp" (OuterVolumeSpecName: "kube-api-access-6rbrp") pod "66b568e7-a37b-4eae-b00c-4000f8a51517" (UID: "66b568e7-a37b-4eae-b00c-4000f8a51517"). InnerVolumeSpecName "kube-api-access-6rbrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.197278 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66b568e7-a37b-4eae-b00c-4000f8a51517-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66b568e7-a37b-4eae-b00c-4000f8a51517" (UID: "66b568e7-a37b-4eae-b00c-4000f8a51517"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.209110 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66b568e7-a37b-4eae-b00c-4000f8a51517-config-data" (OuterVolumeSpecName: "config-data") pod "66b568e7-a37b-4eae-b00c-4000f8a51517" (UID: "66b568e7-a37b-4eae-b00c-4000f8a51517"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.211706 4667 generic.go:334] "Generic (PLEG): container finished" podID="66b568e7-a37b-4eae-b00c-4000f8a51517" containerID="6f27fb0d1e05e88ab0aa6caa957843ed3b2d86454219ff4c17528cd2337f0247" exitCode=0 Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.211745 4667 generic.go:334] "Generic (PLEG): container finished" podID="66b568e7-a37b-4eae-b00c-4000f8a51517" containerID="7b395f386184379d45251c480cbbce06f7e24c1b4cdcacd464a232a752b3f9e1" exitCode=143 Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.211800 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"66b568e7-a37b-4eae-b00c-4000f8a51517","Type":"ContainerDied","Data":"6f27fb0d1e05e88ab0aa6caa957843ed3b2d86454219ff4c17528cd2337f0247"} Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.211850 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"66b568e7-a37b-4eae-b00c-4000f8a51517","Type":"ContainerDied","Data":"7b395f386184379d45251c480cbbce06f7e24c1b4cdcacd464a232a752b3f9e1"} Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.211885 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"66b568e7-a37b-4eae-b00c-4000f8a51517","Type":"ContainerDied","Data":"a46635cc1f871707ae61fa83c7a3c708af90a691ed1c6539a1a864cae10ed7e6"} Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.211904 4667 scope.go:117] "RemoveContainer" containerID="6f27fb0d1e05e88ab0aa6caa957843ed3b2d86454219ff4c17528cd2337f0247" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.212610 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.220560 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f","Type":"ContainerStarted","Data":"97bd6bdf24e1610941c300633f3e1745cfcb8388f36367d7b75b7a2db4aee4fa"} Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.245116 4667 scope.go:117] "RemoveContainer" containerID="7b395f386184379d45251c480cbbce06f7e24c1b4cdcacd464a232a752b3f9e1" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.258513 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66b568e7-a37b-4eae-b00c-4000f8a51517-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.258566 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66b568e7-a37b-4eae-b00c-4000f8a51517-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.258585 4667 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66b568e7-a37b-4eae-b00c-4000f8a51517-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.258601 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rbrp\" (UniqueName: \"kubernetes.io/projected/66b568e7-a37b-4eae-b00c-4000f8a51517-kube-api-access-6rbrp\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.274652 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.287902 4667 scope.go:117] "RemoveContainer" containerID="6f27fb0d1e05e88ab0aa6caa957843ed3b2d86454219ff4c17528cd2337f0247" Jan 31 04:10:21 crc kubenswrapper[4667]: E0131 04:10:21.289619 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f27fb0d1e05e88ab0aa6caa957843ed3b2d86454219ff4c17528cd2337f0247\": container with ID starting with 6f27fb0d1e05e88ab0aa6caa957843ed3b2d86454219ff4c17528cd2337f0247 not found: ID does not exist" containerID="6f27fb0d1e05e88ab0aa6caa957843ed3b2d86454219ff4c17528cd2337f0247" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.289679 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f27fb0d1e05e88ab0aa6caa957843ed3b2d86454219ff4c17528cd2337f0247"} err="failed to get container status \"6f27fb0d1e05e88ab0aa6caa957843ed3b2d86454219ff4c17528cd2337f0247\": rpc error: code = NotFound desc = could not find container \"6f27fb0d1e05e88ab0aa6caa957843ed3b2d86454219ff4c17528cd2337f0247\": container with ID starting with 6f27fb0d1e05e88ab0aa6caa957843ed3b2d86454219ff4c17528cd2337f0247 not found: ID does not exist" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.289707 4667 scope.go:117] "RemoveContainer" containerID="7b395f386184379d45251c480cbbce06f7e24c1b4cdcacd464a232a752b3f9e1" Jan 31 04:10:21 crc kubenswrapper[4667]: E0131 04:10:21.290220 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b395f386184379d45251c480cbbce06f7e24c1b4cdcacd464a232a752b3f9e1\": container with ID starting with 7b395f386184379d45251c480cbbce06f7e24c1b4cdcacd464a232a752b3f9e1 not found: ID does not exist" 
containerID="7b395f386184379d45251c480cbbce06f7e24c1b4cdcacd464a232a752b3f9e1" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.290261 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b395f386184379d45251c480cbbce06f7e24c1b4cdcacd464a232a752b3f9e1"} err="failed to get container status \"7b395f386184379d45251c480cbbce06f7e24c1b4cdcacd464a232a752b3f9e1\": rpc error: code = NotFound desc = could not find container \"7b395f386184379d45251c480cbbce06f7e24c1b4cdcacd464a232a752b3f9e1\": container with ID starting with 7b395f386184379d45251c480cbbce06f7e24c1b4cdcacd464a232a752b3f9e1 not found: ID does not exist" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.290283 4667 scope.go:117] "RemoveContainer" containerID="6f27fb0d1e05e88ab0aa6caa957843ed3b2d86454219ff4c17528cd2337f0247" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.291063 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f27fb0d1e05e88ab0aa6caa957843ed3b2d86454219ff4c17528cd2337f0247"} err="failed to get container status \"6f27fb0d1e05e88ab0aa6caa957843ed3b2d86454219ff4c17528cd2337f0247\": rpc error: code = NotFound desc = could not find container \"6f27fb0d1e05e88ab0aa6caa957843ed3b2d86454219ff4c17528cd2337f0247\": container with ID starting with 6f27fb0d1e05e88ab0aa6caa957843ed3b2d86454219ff4c17528cd2337f0247 not found: ID does not exist" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.291090 4667 scope.go:117] "RemoveContainer" containerID="7b395f386184379d45251c480cbbce06f7e24c1b4cdcacd464a232a752b3f9e1" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.294081 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b395f386184379d45251c480cbbce06f7e24c1b4cdcacd464a232a752b3f9e1"} err="failed to get container status \"7b395f386184379d45251c480cbbce06f7e24c1b4cdcacd464a232a752b3f9e1\": rpc error: code = NotFound desc = could not find container \"7b395f386184379d45251c480cbbce06f7e24c1b4cdcacd464a232a752b3f9e1\": container with ID starting with 7b395f386184379d45251c480cbbce06f7e24c1b4cdcacd464a232a752b3f9e1 not found: ID does not exist" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.312045 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b27517d-03d7-421b-9875-86ed13c59563" path="/var/lib/kubelet/pods/9b27517d-03d7-421b-9875-86ed13c59563/volumes" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.317718 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.317770 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:10:21 crc kubenswrapper[4667]: E0131 04:10:21.319823 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b568e7-a37b-4eae-b00c-4000f8a51517" containerName="nova-metadata-metadata" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.319896 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b568e7-a37b-4eae-b00c-4000f8a51517" containerName="nova-metadata-metadata" Jan 31 04:10:21 crc kubenswrapper[4667]: E0131 04:10:21.319921 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b568e7-a37b-4eae-b00c-4000f8a51517" containerName="nova-metadata-log" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.319930 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b568e7-a37b-4eae-b00c-4000f8a51517" 
containerName="nova-metadata-log" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.320417 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="66b568e7-a37b-4eae-b00c-4000f8a51517" containerName="nova-metadata-metadata" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.320454 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="66b568e7-a37b-4eae-b00c-4000f8a51517" containerName="nova-metadata-log" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.324384 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.334984 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.335024 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.336346 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.362417 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3e1009d-834f-4fa9-90a7-771cce6f6558-config-data\") pod \"nova-metadata-0\" (UID: \"b3e1009d-834f-4fa9-90a7-771cce6f6558\") " pod="openstack/nova-metadata-0" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.362564 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3e1009d-834f-4fa9-90a7-771cce6f6558-logs\") pod \"nova-metadata-0\" (UID: \"b3e1009d-834f-4fa9-90a7-771cce6f6558\") " pod="openstack/nova-metadata-0" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.362590 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptvm2\" (UniqueName: \"kubernetes.io/projected/b3e1009d-834f-4fa9-90a7-771cce6f6558-kube-api-access-ptvm2\") pod \"nova-metadata-0\" (UID: \"b3e1009d-834f-4fa9-90a7-771cce6f6558\") " pod="openstack/nova-metadata-0" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.362620 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3e1009d-834f-4fa9-90a7-771cce6f6558-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b3e1009d-834f-4fa9-90a7-771cce6f6558\") " pod="openstack/nova-metadata-0" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.362646 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3e1009d-834f-4fa9-90a7-771cce6f6558-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b3e1009d-834f-4fa9-90a7-771cce6f6558\") " pod="openstack/nova-metadata-0" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.464920 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3e1009d-834f-4fa9-90a7-771cce6f6558-config-data\") pod \"nova-metadata-0\" (UID: \"b3e1009d-834f-4fa9-90a7-771cce6f6558\") " pod="openstack/nova-metadata-0" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.465067 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b3e1009d-834f-4fa9-90a7-771cce6f6558-logs\") pod \"nova-metadata-0\" (UID: \"b3e1009d-834f-4fa9-90a7-771cce6f6558\") " pod="openstack/nova-metadata-0" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.465100 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptvm2\" (UniqueName: \"kubernetes.io/projected/b3e1009d-834f-4fa9-90a7-771cce6f6558-kube-api-access-ptvm2\") pod \"nova-metadata-0\" (UID: \"b3e1009d-834f-4fa9-90a7-771cce6f6558\") " pod="openstack/nova-metadata-0" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.465126 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3e1009d-834f-4fa9-90a7-771cce6f6558-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b3e1009d-834f-4fa9-90a7-771cce6f6558\") " pod="openstack/nova-metadata-0" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.465155 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3e1009d-834f-4fa9-90a7-771cce6f6558-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b3e1009d-834f-4fa9-90a7-771cce6f6558\") " pod="openstack/nova-metadata-0" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.466993 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3e1009d-834f-4fa9-90a7-771cce6f6558-logs\") pod \"nova-metadata-0\" (UID: \"b3e1009d-834f-4fa9-90a7-771cce6f6558\") " pod="openstack/nova-metadata-0" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.471092 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3e1009d-834f-4fa9-90a7-771cce6f6558-config-data\") pod \"nova-metadata-0\" (UID: \"b3e1009d-834f-4fa9-90a7-771cce6f6558\") " pod="openstack/nova-metadata-0" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.480082 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3e1009d-834f-4fa9-90a7-771cce6f6558-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b3e1009d-834f-4fa9-90a7-771cce6f6558\") " pod="openstack/nova-metadata-0" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.481684 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3e1009d-834f-4fa9-90a7-771cce6f6558-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b3e1009d-834f-4fa9-90a7-771cce6f6558\") " pod="openstack/nova-metadata-0" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.490780 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptvm2\" (UniqueName: \"kubernetes.io/projected/b3e1009d-834f-4fa9-90a7-771cce6f6558-kube-api-access-ptvm2\") pod \"nova-metadata-0\" (UID: \"b3e1009d-834f-4fa9-90a7-771cce6f6558\") " pod="openstack/nova-metadata-0" Jan 31 04:10:21 crc kubenswrapper[4667]: I0131 04:10:21.650013 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 04:10:22 crc kubenswrapper[4667]: I0131 04:10:22.234434 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f","Type":"ContainerStarted","Data":"86e9cca7895c69c2a106eaed4f81e2df2b707dbddf9b0b60c93de799023e38d9"} Jan 31 04:10:22 crc kubenswrapper[4667]: I0131 04:10:22.296472 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:10:22 crc kubenswrapper[4667]: I0131 04:10:22.447739 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 04:10:22 crc kubenswrapper[4667]: I0131 04:10:22.447815 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 04:10:22 crc kubenswrapper[4667]: I0131 04:10:22.476149 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-5mh27" Jan 31 04:10:22 crc kubenswrapper[4667]: I0131 04:10:22.579726 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-l6h6r"] Jan 31 04:10:22 crc kubenswrapper[4667]: I0131 04:10:22.586231 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" podUID="899eb625-8bc5-458d-88f4-cc8ccc4bd261" containerName="dnsmasq-dns" containerID="cri-o://ca6176e07daa5e1d1933f71e0f5f558937a550af6023da0d0e967c90fabf04a7" gracePeriod=10 Jan 31 04:10:22 crc kubenswrapper[4667]: I0131 04:10:22.862809 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 04:10:22 crc kubenswrapper[4667]: I0131 04:10:22.864969 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 04:10:22 crc kubenswrapper[4667]: I0131 04:10:22.939597 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 04:10:22 crc kubenswrapper[4667]: I0131 04:10:22.964336 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:23 crc kubenswrapper[4667]: I0131 04:10:23.244861 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b3e1009d-834f-4fa9-90a7-771cce6f6558","Type":"ContainerStarted","Data":"bffd0606db09d5cc8eab6a35764780ac1118e8006c84cd7847a99c18334fc7d3"} Jan 31 04:10:23 crc kubenswrapper[4667]: I0131 04:10:23.245331 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b3e1009d-834f-4fa9-90a7-771cce6f6558","Type":"ContainerStarted","Data":"8b4ce6f025ad16ddfb649e4b7ada4b62d478f354b7486ea7abf2a17c6c434df3"} Jan 31 04:10:23 crc kubenswrapper[4667]: I0131 04:10:23.247397 4667 generic.go:334] "Generic (PLEG): container finished" podID="899eb625-8bc5-458d-88f4-cc8ccc4bd261" containerID="ca6176e07daa5e1d1933f71e0f5f558937a550af6023da0d0e967c90fabf04a7" exitCode=0 Jan 31 04:10:23 crc kubenswrapper[4667]: I0131 04:10:23.247502 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" event={"ID":"899eb625-8bc5-458d-88f4-cc8ccc4bd261","Type":"ContainerDied","Data":"ca6176e07daa5e1d1933f71e0f5f558937a550af6023da0d0e967c90fabf04a7"} Jan 31 04:10:23 crc kubenswrapper[4667]: I0131 04:10:23.294153 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="66b568e7-a37b-4eae-b00c-4000f8a51517" path="/var/lib/kubelet/pods/66b568e7-a37b-4eae-b00c-4000f8a51517/volumes" Jan 31 04:10:23 crc kubenswrapper[4667]: I0131 04:10:23.311269 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 04:10:23 crc kubenswrapper[4667]: I0131 04:10:23.532028 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e1b9d2c1-5919-4939-8523-445092cad2a8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 04:10:23 crc kubenswrapper[4667]: I0131 04:10:23.532152 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e1b9d2c1-5919-4939-8523-445092cad2a8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.082046 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.165259 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-config\") pod \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.165316 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-dns-swift-storage-0\") pod \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.165411 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-dns-svc\") pod \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.165595 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-ovsdbserver-sb\") pod \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.165657 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6gvc\" (UniqueName: \"kubernetes.io/projected/899eb625-8bc5-458d-88f4-cc8ccc4bd261-kube-api-access-w6gvc\") pod \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.187130 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/899eb625-8bc5-458d-88f4-cc8ccc4bd261-kube-api-access-w6gvc" (OuterVolumeSpecName: "kube-api-access-w6gvc") pod "899eb625-8bc5-458d-88f4-cc8ccc4bd261" (UID: "899eb625-8bc5-458d-88f4-cc8ccc4bd261"). InnerVolumeSpecName "kube-api-access-w6gvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.245917 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "899eb625-8bc5-458d-88f4-cc8ccc4bd261" (UID: "899eb625-8bc5-458d-88f4-cc8ccc4bd261"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.268732 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "899eb625-8bc5-458d-88f4-cc8ccc4bd261" (UID: "899eb625-8bc5-458d-88f4-cc8ccc4bd261"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.269141 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-ovsdbserver-nb\") pod \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\" (UID: \"899eb625-8bc5-458d-88f4-cc8ccc4bd261\") " Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.273899 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.273931 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6gvc\" (UniqueName: \"kubernetes.io/projected/899eb625-8bc5-458d-88f4-cc8ccc4bd261-kube-api-access-w6gvc\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.273942 4667 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.306511 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b3e1009d-834f-4fa9-90a7-771cce6f6558","Type":"ContainerStarted","Data":"5d6ada2012f45a45f5d61bc32e017bf4605b83ecd57aa522bb771d189f30b6d6"} Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.313438 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-config" (OuterVolumeSpecName: "config") pod "899eb625-8bc5-458d-88f4-cc8ccc4bd261" (UID: "899eb625-8bc5-458d-88f4-cc8ccc4bd261"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.318193 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" event={"ID":"899eb625-8bc5-458d-88f4-cc8ccc4bd261","Type":"ContainerDied","Data":"43d90f16af7fa3a8aa01b601cf9dfb44aad45aea7c673c29b789c8f8a44a3cde"} Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.318277 4667 scope.go:117] "RemoveContainer" containerID="ca6176e07daa5e1d1933f71e0f5f558937a550af6023da0d0e967c90fabf04a7" Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.318576 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-l6h6r" Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.320161 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "899eb625-8bc5-458d-88f4-cc8ccc4bd261" (UID: "899eb625-8bc5-458d-88f4-cc8ccc4bd261"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.352284 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f","Type":"ContainerStarted","Data":"15b01ff7de5eaf63c13a0855e5d28dc639ce10af2ea78aa019afbb0274e79bef"} Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.365409 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "899eb625-8bc5-458d-88f4-cc8ccc4bd261" (UID: "899eb625-8bc5-458d-88f4-cc8ccc4bd261"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.379491 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.379553 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.379562 4667 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/899eb625-8bc5-458d-88f4-cc8ccc4bd261-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.386369 4667 scope.go:117] "RemoveContainer" containerID="5c1ff677289b4ede3000837cdef077d10eaa071ca64adc07ac9fa8a9270a5165" Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.812993 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.812965295 podStartE2EDuration="3.812965295s" podCreationTimestamp="2026-01-31 04:10:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:10:24.336313282 +0000 UTC m=+1347.852648581" watchObservedRunningTime="2026-01-31 04:10:24.812965295 +0000 UTC m=+1348.329300594" Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.827020 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-l6h6r"] Jan 31 04:10:24 crc kubenswrapper[4667]: I0131 04:10:24.866454 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-l6h6r"] Jan 31 04:10:25 crc kubenswrapper[4667]: I0131 04:10:25.302270 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899eb625-8bc5-458d-88f4-cc8ccc4bd261" path="/var/lib/kubelet/pods/899eb625-8bc5-458d-88f4-cc8ccc4bd261/volumes" Jan 31 04:10:25 crc kubenswrapper[4667]: I0131 04:10:25.366529 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f","Type":"ContainerStarted","Data":"6ca3b1af75834ef27fb2703bbd239f425c7417dbab5a59bd3150a8e71bd993e9"} Jan 31 04:10:26 crc kubenswrapper[4667]: I0131 04:10:26.650389 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 04:10:26 crc kubenswrapper[4667]: I0131 04:10:26.650940 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 04:10:26 crc kubenswrapper[4667]: I0131 04:10:26.760311 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 04:10:26 crc kubenswrapper[4667]: I0131 04:10:26.760439 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:10:26 crc kubenswrapper[4667]: I0131 04:10:26.761597 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"856c0d14a9c006eba9b5acda21554d0a1e3d38398546c6f05a23d35e0977b245"} pod="openstack/horizon-78789d8f44-5trmc" containerMessage="Container horizon failed startup probe, will be restarted" Jan 31 04:10:26 crc kubenswrapper[4667]: I0131 04:10:26.761648 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" containerID="cri-o://856c0d14a9c006eba9b5acda21554d0a1e3d38398546c6f05a23d35e0977b245" gracePeriod=30 Jan 31 04:10:26 crc kubenswrapper[4667]: I0131 04:10:26.886119 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86c748c4d6-2grmh" podUID="c6974567-3bea-447a-bb8b-ced22b6d34ce" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 04:10:26 crc kubenswrapper[4667]: I0131 04:10:26.886240 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:10:26 crc kubenswrapper[4667]: I0131 04:10:26.887279 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"4d94eb28096da9efc7dc4e1a7ab99c87543d8b142e7c7ff1698b7c7d17eb3cc0"} pod="openstack/horizon-86c748c4d6-2grmh" containerMessage="Container horizon failed startup probe, will be restarted" Jan 31 04:10:26 crc kubenswrapper[4667]: I0131 04:10:26.887328 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-86c748c4d6-2grmh" podUID="c6974567-3bea-447a-bb8b-ced22b6d34ce" containerName="horizon" containerID="cri-o://4d94eb28096da9efc7dc4e1a7ab99c87543d8b142e7c7ff1698b7c7d17eb3cc0" gracePeriod=30 Jan 31 04:10:28 crc kubenswrapper[4667]: I0131 04:10:28.475625 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f","Type":"ContainerStarted","Data":"c84a5c48601f0984c2480dee8470e96b21aa2471e1eafac74da7ecc786ffbc25"} Jan 31 04:10:28 crc kubenswrapper[4667]: I0131 04:10:28.476140 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Jan 31 04:10:28 crc kubenswrapper[4667]: I0131 04:10:28.485919 4667 generic.go:334] "Generic (PLEG): container finished" podID="87224b26-43eb-4712-bef1-050a0653fb28" containerID="38f08cd67983bd4889429f7af6614e1b16f81cb85fa950741834b9eed02dbd53" exitCode=0 Jan 31 04:10:28 crc kubenswrapper[4667]: I0131 04:10:28.485974 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ggfpz" event={"ID":"87224b26-43eb-4712-bef1-050a0653fb28","Type":"ContainerDied","Data":"38f08cd67983bd4889429f7af6614e1b16f81cb85fa950741834b9eed02dbd53"} Jan 31 04:10:28 crc kubenswrapper[4667]: I0131 04:10:28.513409 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.737933337 podStartE2EDuration="9.513360823s" podCreationTimestamp="2026-01-31 04:10:19 +0000 UTC" firstStartedPulling="2026-01-31 04:10:20.356353503 +0000 UTC m=+1343.872688802" lastFinishedPulling="2026-01-31 04:10:27.131780989 +0000 UTC m=+1350.648116288" observedRunningTime="2026-01-31 04:10:28.502366922 +0000 UTC m=+1352.018702241" watchObservedRunningTime="2026-01-31 04:10:28.513360823 +0000 UTC m=+1352.029696162" Jan 31 04:10:29 crc kubenswrapper[4667]: I0131 04:10:29.898130 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ggfpz" Jan 31 04:10:29 crc kubenswrapper[4667]: I0131 04:10:29.921114 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87224b26-43eb-4712-bef1-050a0653fb28-config-data\") pod \"87224b26-43eb-4712-bef1-050a0653fb28\" (UID: \"87224b26-43eb-4712-bef1-050a0653fb28\") " Jan 31 04:10:29 crc kubenswrapper[4667]: I0131 04:10:29.921433 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87224b26-43eb-4712-bef1-050a0653fb28-combined-ca-bundle\") pod \"87224b26-43eb-4712-bef1-050a0653fb28\" (UID: \"87224b26-43eb-4712-bef1-050a0653fb28\") " Jan 31 04:10:29 crc kubenswrapper[4667]: I0131 04:10:29.921544 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpp7s\" (UniqueName: \"kubernetes.io/projected/87224b26-43eb-4712-bef1-050a0653fb28-kube-api-access-xpp7s\") pod \"87224b26-43eb-4712-bef1-050a0653fb28\" (UID: \"87224b26-43eb-4712-bef1-050a0653fb28\") " Jan 31 04:10:29 crc kubenswrapper[4667]: I0131 04:10:29.921621 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87224b26-43eb-4712-bef1-050a0653fb28-scripts\") pod \"87224b26-43eb-4712-bef1-050a0653fb28\" (UID: \"87224b26-43eb-4712-bef1-050a0653fb28\") " Jan 31 04:10:29 crc kubenswrapper[4667]: I0131 04:10:29.929043 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87224b26-43eb-4712-bef1-050a0653fb28-scripts" (OuterVolumeSpecName: "scripts") pod "87224b26-43eb-4712-bef1-050a0653fb28" (UID: "87224b26-43eb-4712-bef1-050a0653fb28"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:29 crc kubenswrapper[4667]: I0131 04:10:29.943400 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87224b26-43eb-4712-bef1-050a0653fb28-kube-api-access-xpp7s" (OuterVolumeSpecName: "kube-api-access-xpp7s") pod "87224b26-43eb-4712-bef1-050a0653fb28" (UID: "87224b26-43eb-4712-bef1-050a0653fb28"). InnerVolumeSpecName "kube-api-access-xpp7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:10:30 crc kubenswrapper[4667]: I0131 04:10:30.026814 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpp7s\" (UniqueName: \"kubernetes.io/projected/87224b26-43eb-4712-bef1-050a0653fb28-kube-api-access-xpp7s\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:30 crc kubenswrapper[4667]: I0131 04:10:30.027142 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87224b26-43eb-4712-bef1-050a0653fb28-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:30 crc kubenswrapper[4667]: I0131 04:10:30.038132 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87224b26-43eb-4712-bef1-050a0653fb28-config-data" (OuterVolumeSpecName: "config-data") pod "87224b26-43eb-4712-bef1-050a0653fb28" (UID: "87224b26-43eb-4712-bef1-050a0653fb28"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:30 crc kubenswrapper[4667]: I0131 04:10:30.051485 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87224b26-43eb-4712-bef1-050a0653fb28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87224b26-43eb-4712-bef1-050a0653fb28" (UID: "87224b26-43eb-4712-bef1-050a0653fb28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:30 crc kubenswrapper[4667]: I0131 04:10:30.129251 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87224b26-43eb-4712-bef1-050a0653fb28-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:30 crc kubenswrapper[4667]: I0131 04:10:30.129295 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87224b26-43eb-4712-bef1-050a0653fb28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:30 crc kubenswrapper[4667]: I0131 04:10:30.523395 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ggfpz" event={"ID":"87224b26-43eb-4712-bef1-050a0653fb28","Type":"ContainerDied","Data":"1b424600dcd6e8c5d0b6af4339dd1cf081319bc24abd475fce83207664fd5fec"} Jan 31 04:10:30 crc kubenswrapper[4667]: I0131 04:10:30.523455 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b424600dcd6e8c5d0b6af4339dd1cf081319bc24abd475fce83207664fd5fec" Jan 31 04:10:30 crc kubenswrapper[4667]: I0131 04:10:30.523487 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ggfpz" Jan 31 04:10:30 crc kubenswrapper[4667]: I0131 04:10:30.526255 4667 generic.go:334] "Generic (PLEG): container finished" podID="ed98e28a-5baf-4f7a-aafb-b03916785619" containerID="56a297688de57dc27497a4f04bd410ffa1654c2e35b421ce26c69749805b19b3" exitCode=0 Jan 31 04:10:30 crc kubenswrapper[4667]: I0131 04:10:30.526303 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-w94fr" event={"ID":"ed98e28a-5baf-4f7a-aafb-b03916785619","Type":"ContainerDied","Data":"56a297688de57dc27497a4f04bd410ffa1654c2e35b421ce26c69749805b19b3"} Jan 31 04:10:30 crc kubenswrapper[4667]: I0131 04:10:30.744327 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:10:30 crc kubenswrapper[4667]: I0131 04:10:30.744631 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e1b9d2c1-5919-4939-8523-445092cad2a8" containerName="nova-api-log" containerID="cri-o://906a92cab5d3cc7dd7786eb0130aeb1d2854f0cc3769c0e8172ead36ae122878" gracePeriod=30 Jan 31 04:10:30 crc kubenswrapper[4667]: I0131 04:10:30.744784 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e1b9d2c1-5919-4939-8523-445092cad2a8" containerName="nova-api-api" containerID="cri-o://8914066a25b5cde067722b64fd0e5e1853b49463dbc35ca70c4a15114d8cc9c3" gracePeriod=30 Jan 31 04:10:30 crc kubenswrapper[4667]: I0131 04:10:30.793426 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:10:30 crc kubenswrapper[4667]: I0131 04:10:30.793925 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f8e6271b-34bd-43ad-9505-c4eff960694a" containerName="nova-scheduler-scheduler" containerID="cri-o://2e31892dded3b6d96b9608da2963ad1f77da32c3b2e8093c0559ce612c5d6a7c" gracePeriod=30 Jan 31 04:10:30 crc kubenswrapper[4667]: I0131 04:10:30.813024 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:10:30 crc kubenswrapper[4667]: I0131 04:10:30.813319 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b3e1009d-834f-4fa9-90a7-771cce6f6558" containerName="nova-metadata-log" containerID="cri-o://bffd0606db09d5cc8eab6a35764780ac1118e8006c84cd7847a99c18334fc7d3" gracePeriod=30 Jan 31 04:10:30 crc kubenswrapper[4667]: I0131 04:10:30.813511 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b3e1009d-834f-4fa9-90a7-771cce6f6558" containerName="nova-metadata-metadata" containerID="cri-o://5d6ada2012f45a45f5d61bc32e017bf4605b83ecd57aa522bb771d189f30b6d6" gracePeriod=30 Jan 31 04:10:31 crc kubenswrapper[4667]: I0131 04:10:31.577026 4667 generic.go:334] "Generic (PLEG): container finished" podID="c6974567-3bea-447a-bb8b-ced22b6d34ce" containerID="4d94eb28096da9efc7dc4e1a7ab99c87543d8b142e7c7ff1698b7c7d17eb3cc0" exitCode=0 Jan 31 04:10:31 crc kubenswrapper[4667]: I0131 04:10:31.577251 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86c748c4d6-2grmh" event={"ID":"c6974567-3bea-447a-bb8b-ced22b6d34ce","Type":"ContainerDied","Data":"4d94eb28096da9efc7dc4e1a7ab99c87543d8b142e7c7ff1698b7c7d17eb3cc0"} Jan 31 04:10:31 crc kubenswrapper[4667]: I0131 04:10:31.577590 4667 scope.go:117] "RemoveContainer" 
containerID="8585ef04e351d14473c07be1275ec2c6840212275304d32bbdccbfc70cb910c8" Jan 31 04:10:31 crc kubenswrapper[4667]: I0131 04:10:31.611682 4667 generic.go:334] "Generic (PLEG): container finished" podID="b3e1009d-834f-4fa9-90a7-771cce6f6558" containerID="5d6ada2012f45a45f5d61bc32e017bf4605b83ecd57aa522bb771d189f30b6d6" exitCode=0 Jan 31 04:10:31 crc kubenswrapper[4667]: I0131 04:10:31.611727 4667 generic.go:334] "Generic (PLEG): container finished" podID="b3e1009d-834f-4fa9-90a7-771cce6f6558" containerID="bffd0606db09d5cc8eab6a35764780ac1118e8006c84cd7847a99c18334fc7d3" exitCode=143 Jan 31 04:10:31 crc kubenswrapper[4667]: I0131 04:10:31.611740 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b3e1009d-834f-4fa9-90a7-771cce6f6558","Type":"ContainerDied","Data":"5d6ada2012f45a45f5d61bc32e017bf4605b83ecd57aa522bb771d189f30b6d6"} Jan 31 04:10:31 crc kubenswrapper[4667]: I0131 04:10:31.611817 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b3e1009d-834f-4fa9-90a7-771cce6f6558","Type":"ContainerDied","Data":"bffd0606db09d5cc8eab6a35764780ac1118e8006c84cd7847a99c18334fc7d3"} Jan 31 04:10:31 crc kubenswrapper[4667]: I0131 04:10:31.642170 4667 generic.go:334] "Generic (PLEG): container finished" podID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerID="856c0d14a9c006eba9b5acda21554d0a1e3d38398546c6f05a23d35e0977b245" exitCode=0 Jan 31 04:10:31 crc kubenswrapper[4667]: I0131 04:10:31.642287 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78789d8f44-5trmc" event={"ID":"b7f8fd18-06a0-432e-8c17-c9b432b6ca69","Type":"ContainerDied","Data":"856c0d14a9c006eba9b5acda21554d0a1e3d38398546c6f05a23d35e0977b245"} Jan 31 04:10:31 crc kubenswrapper[4667]: I0131 04:10:31.663281 4667 generic.go:334] "Generic (PLEG): container finished" podID="e1b9d2c1-5919-4939-8523-445092cad2a8" containerID="906a92cab5d3cc7dd7786eb0130aeb1d2854f0cc3769c0e8172ead36ae122878" exitCode=143 Jan 31 04:10:31 crc kubenswrapper[4667]: I0131 04:10:31.663381 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e1b9d2c1-5919-4939-8523-445092cad2a8","Type":"ContainerDied","Data":"906a92cab5d3cc7dd7786eb0130aeb1d2854f0cc3769c0e8172ead36ae122878"} Jan 31 04:10:31 crc kubenswrapper[4667]: I0131 04:10:31.837263 4667 scope.go:117] "RemoveContainer" containerID="d51854ff784d64b2b3584b6cdda45491a29c7d1089ddf69708469cfc6e98fccc" Jan 31 04:10:31 crc kubenswrapper[4667]: I0131 04:10:31.926988 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.083227 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptvm2\" (UniqueName: \"kubernetes.io/projected/b3e1009d-834f-4fa9-90a7-771cce6f6558-kube-api-access-ptvm2\") pod \"b3e1009d-834f-4fa9-90a7-771cce6f6558\" (UID: \"b3e1009d-834f-4fa9-90a7-771cce6f6558\") " Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.083590 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3e1009d-834f-4fa9-90a7-771cce6f6558-config-data\") pod \"b3e1009d-834f-4fa9-90a7-771cce6f6558\" (UID: \"b3e1009d-834f-4fa9-90a7-771cce6f6558\") " Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.083665 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3e1009d-834f-4fa9-90a7-771cce6f6558-combined-ca-bundle\") pod \"b3e1009d-834f-4fa9-90a7-771cce6f6558\" (UID: \"b3e1009d-834f-4fa9-90a7-771cce6f6558\") " Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.083699 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3e1009d-834f-4fa9-90a7-771cce6f6558-nova-metadata-tls-certs\") pod \"b3e1009d-834f-4fa9-90a7-771cce6f6558\" (UID: \"b3e1009d-834f-4fa9-90a7-771cce6f6558\") " Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.083732 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3e1009d-834f-4fa9-90a7-771cce6f6558-logs\") pod \"b3e1009d-834f-4fa9-90a7-771cce6f6558\" (UID: \"b3e1009d-834f-4fa9-90a7-771cce6f6558\") " Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.084634 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3e1009d-834f-4fa9-90a7-771cce6f6558-logs" (OuterVolumeSpecName: "logs") pod "b3e1009d-834f-4fa9-90a7-771cce6f6558" (UID: "b3e1009d-834f-4fa9-90a7-771cce6f6558"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.109222 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3e1009d-834f-4fa9-90a7-771cce6f6558-kube-api-access-ptvm2" (OuterVolumeSpecName: "kube-api-access-ptvm2") pod "b3e1009d-834f-4fa9-90a7-771cce6f6558" (UID: "b3e1009d-834f-4fa9-90a7-771cce6f6558"). InnerVolumeSpecName "kube-api-access-ptvm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.144099 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3e1009d-834f-4fa9-90a7-771cce6f6558-config-data" (OuterVolumeSpecName: "config-data") pod "b3e1009d-834f-4fa9-90a7-771cce6f6558" (UID: "b3e1009d-834f-4fa9-90a7-771cce6f6558"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.160139 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3e1009d-834f-4fa9-90a7-771cce6f6558-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b3e1009d-834f-4fa9-90a7-771cce6f6558" (UID: "b3e1009d-834f-4fa9-90a7-771cce6f6558"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.166509 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3e1009d-834f-4fa9-90a7-771cce6f6558-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3e1009d-834f-4fa9-90a7-771cce6f6558" (UID: "b3e1009d-834f-4fa9-90a7-771cce6f6558"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.186073 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3e1009d-834f-4fa9-90a7-771cce6f6558-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.186116 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3e1009d-834f-4fa9-90a7-771cce6f6558-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.186133 4667 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3e1009d-834f-4fa9-90a7-771cce6f6558-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.186145 4667 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3e1009d-834f-4fa9-90a7-771cce6f6558-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.186155 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptvm2\" (UniqueName: \"kubernetes.io/projected/b3e1009d-834f-4fa9-90a7-771cce6f6558-kube-api-access-ptvm2\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.251917 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-w94fr" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.392010 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed98e28a-5baf-4f7a-aafb-b03916785619-config-data\") pod \"ed98e28a-5baf-4f7a-aafb-b03916785619\" (UID: \"ed98e28a-5baf-4f7a-aafb-b03916785619\") " Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.392126 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw9c8\" (UniqueName: \"kubernetes.io/projected/ed98e28a-5baf-4f7a-aafb-b03916785619-kube-api-access-zw9c8\") pod \"ed98e28a-5baf-4f7a-aafb-b03916785619\" (UID: \"ed98e28a-5baf-4f7a-aafb-b03916785619\") " Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.392154 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed98e28a-5baf-4f7a-aafb-b03916785619-scripts\") pod \"ed98e28a-5baf-4f7a-aafb-b03916785619\" (UID: \"ed98e28a-5baf-4f7a-aafb-b03916785619\") " Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.392198 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed98e28a-5baf-4f7a-aafb-b03916785619-combined-ca-bundle\") pod \"ed98e28a-5baf-4f7a-aafb-b03916785619\" (UID: \"ed98e28a-5baf-4f7a-aafb-b03916785619\") " Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.396096 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed98e28a-5baf-4f7a-aafb-b03916785619-kube-api-access-zw9c8" (OuterVolumeSpecName: "kube-api-access-zw9c8") pod "ed98e28a-5baf-4f7a-aafb-b03916785619" (UID: "ed98e28a-5baf-4f7a-aafb-b03916785619"). InnerVolumeSpecName "kube-api-access-zw9c8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.400831 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed98e28a-5baf-4f7a-aafb-b03916785619-scripts" (OuterVolumeSpecName: "scripts") pod "ed98e28a-5baf-4f7a-aafb-b03916785619" (UID: "ed98e28a-5baf-4f7a-aafb-b03916785619"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.425516 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed98e28a-5baf-4f7a-aafb-b03916785619-config-data" (OuterVolumeSpecName: "config-data") pod "ed98e28a-5baf-4f7a-aafb-b03916785619" (UID: "ed98e28a-5baf-4f7a-aafb-b03916785619"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.427176 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed98e28a-5baf-4f7a-aafb-b03916785619-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed98e28a-5baf-4f7a-aafb-b03916785619" (UID: "ed98e28a-5baf-4f7a-aafb-b03916785619"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.495689 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw9c8\" (UniqueName: \"kubernetes.io/projected/ed98e28a-5baf-4f7a-aafb-b03916785619-kube-api-access-zw9c8\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.495855 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed98e28a-5baf-4f7a-aafb-b03916785619-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.495905 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed98e28a-5baf-4f7a-aafb-b03916785619-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.495920 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed98e28a-5baf-4f7a-aafb-b03916785619-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.674535 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-w94fr" event={"ID":"ed98e28a-5baf-4f7a-aafb-b03916785619","Type":"ContainerDied","Data":"ea20f5b3f3a8de95ec11eef00cbf83d7de8cf6b9486e0c2a50b06ff7d9dd5e1f"} Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.674588 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea20f5b3f3a8de95ec11eef00cbf83d7de8cf6b9486e0c2a50b06ff7d9dd5e1f" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.674687 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-w94fr" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.677319 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86c748c4d6-2grmh" event={"ID":"c6974567-3bea-447a-bb8b-ced22b6d34ce","Type":"ContainerStarted","Data":"6f9ac7692b173b0244a378b7e16684ea3770a94bdaa3975e7a01eb055696e653"} Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.680152 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78789d8f44-5trmc" event={"ID":"b7f8fd18-06a0-432e-8c17-c9b432b6ca69","Type":"ContainerStarted","Data":"d09258b8f1e4532ac1a5b7d64767a2c240653c4e63ff54849c8729b313b3f8c4"} Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.682813 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b3e1009d-834f-4fa9-90a7-771cce6f6558","Type":"ContainerDied","Data":"8b4ce6f025ad16ddfb649e4b7ada4b62d478f354b7486ea7abf2a17c6c434df3"} Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.682980 4667 scope.go:117] "RemoveContainer" containerID="5d6ada2012f45a45f5d61bc32e017bf4605b83ecd57aa522bb771d189f30b6d6" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.683083 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.764075 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 04:10:32 crc kubenswrapper[4667]: E0131 04:10:32.768181 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87224b26-43eb-4712-bef1-050a0653fb28" containerName="nova-manage" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.768212 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="87224b26-43eb-4712-bef1-050a0653fb28" containerName="nova-manage" Jan 31 04:10:32 crc kubenswrapper[4667]: E0131 04:10:32.768242 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899eb625-8bc5-458d-88f4-cc8ccc4bd261" containerName="dnsmasq-dns" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.768249 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="899eb625-8bc5-458d-88f4-cc8ccc4bd261" containerName="dnsmasq-dns" Jan 31 04:10:32 crc kubenswrapper[4667]: E0131 04:10:32.768294 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3e1009d-834f-4fa9-90a7-771cce6f6558" containerName="nova-metadata-log" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.768302 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3e1009d-834f-4fa9-90a7-771cce6f6558" containerName="nova-metadata-log" Jan 31 04:10:32 crc kubenswrapper[4667]: E0131 04:10:32.768311 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed98e28a-5baf-4f7a-aafb-b03916785619" containerName="nova-cell1-conductor-db-sync" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.768318 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed98e28a-5baf-4f7a-aafb-b03916785619" containerName="nova-cell1-conductor-db-sync" Jan 31 04:10:32 crc kubenswrapper[4667]: E0131 04:10:32.768339 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3e1009d-834f-4fa9-90a7-771cce6f6558" containerName="nova-metadata-metadata" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.768351 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3e1009d-834f-4fa9-90a7-771cce6f6558" containerName="nova-metadata-metadata" Jan 31 04:10:32 crc kubenswrapper[4667]: E0131 04:10:32.768374 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899eb625-8bc5-458d-88f4-cc8ccc4bd261" containerName="init" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.768380 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="899eb625-8bc5-458d-88f4-cc8ccc4bd261" containerName="init" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.768828 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="899eb625-8bc5-458d-88f4-cc8ccc4bd261" containerName="dnsmasq-dns" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.768872 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3e1009d-834f-4fa9-90a7-771cce6f6558" containerName="nova-metadata-log" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.768915 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3e1009d-834f-4fa9-90a7-771cce6f6558" containerName="nova-metadata-metadata" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.768937 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="87224b26-43eb-4712-bef1-050a0653fb28" containerName="nova-manage" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.768950 4667 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ed98e28a-5baf-4f7a-aafb-b03916785619" containerName="nova-cell1-conductor-db-sync" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.775253 4667 scope.go:117] "RemoveContainer" containerID="bffd0606db09d5cc8eab6a35764780ac1118e8006c84cd7847a99c18334fc7d3" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.783357 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.786159 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.855575 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 04:10:32 crc kubenswrapper[4667]: E0131 04:10:32.890944 4667 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e31892dded3b6d96b9608da2963ad1f77da32c3b2e8093c0559ce612c5d6a7c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 04:10:32 crc kubenswrapper[4667]: E0131 04:10:32.898938 4667 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e31892dded3b6d96b9608da2963ad1f77da32c3b2e8093c0559ce612c5d6a7c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 04:10:32 crc kubenswrapper[4667]: E0131 04:10:32.905909 4667 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e31892dded3b6d96b9608da2963ad1f77da32c3b2e8093c0559ce612c5d6a7c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 31 04:10:32 crc kubenswrapper[4667]: E0131 04:10:32.905971 4667 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f8e6271b-34bd-43ad-9505-c4eff960694a" containerName="nova-scheduler-scheduler" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.923895 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z8jq\" (UniqueName: \"kubernetes.io/projected/172b5953-ebb3-4eae-b8ee-33d59574f2ac-kube-api-access-8z8jq\") pod \"nova-cell1-conductor-0\" (UID: \"172b5953-ebb3-4eae-b8ee-33d59574f2ac\") " pod="openstack/nova-cell1-conductor-0" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.924084 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172b5953-ebb3-4eae-b8ee-33d59574f2ac-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"172b5953-ebb3-4eae-b8ee-33d59574f2ac\") " pod="openstack/nova-cell1-conductor-0" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.924598 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/172b5953-ebb3-4eae-b8ee-33d59574f2ac-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"172b5953-ebb3-4eae-b8ee-33d59574f2ac\") " pod="openstack/nova-cell1-conductor-0" Jan 31 04:10:32 
crc kubenswrapper[4667]: I0131 04:10:32.924702 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.936828 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.950577 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.955868 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.959833 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.960178 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 31 04:10:32 crc kubenswrapper[4667]: I0131 04:10:32.965809 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.026669 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/172b5953-ebb3-4eae-b8ee-33d59574f2ac-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"172b5953-ebb3-4eae-b8ee-33d59574f2ac\") " pod="openstack/nova-cell1-conductor-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.026750 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc940175-88a1-4c91-bb97-1c72a27560b7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cc940175-88a1-4c91-bb97-1c72a27560b7\") " pod="openstack/nova-metadata-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.026795 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc940175-88a1-4c91-bb97-1c72a27560b7-logs\") pod \"nova-metadata-0\" (UID: \"cc940175-88a1-4c91-bb97-1c72a27560b7\") " pod="openstack/nova-metadata-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.026816 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc940175-88a1-4c91-bb97-1c72a27560b7-config-data\") pod \"nova-metadata-0\" (UID: \"cc940175-88a1-4c91-bb97-1c72a27560b7\") " pod="openstack/nova-metadata-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.026861 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc940175-88a1-4c91-bb97-1c72a27560b7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cc940175-88a1-4c91-bb97-1c72a27560b7\") " pod="openstack/nova-metadata-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.027100 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z8jq\" (UniqueName: \"kubernetes.io/projected/172b5953-ebb3-4eae-b8ee-33d59574f2ac-kube-api-access-8z8jq\") pod \"nova-cell1-conductor-0\" (UID: \"172b5953-ebb3-4eae-b8ee-33d59574f2ac\") " pod="openstack/nova-cell1-conductor-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.027569 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/172b5953-ebb3-4eae-b8ee-33d59574f2ac-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"172b5953-ebb3-4eae-b8ee-33d59574f2ac\") " pod="openstack/nova-cell1-conductor-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.027687 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tcx7\" (UniqueName: \"kubernetes.io/projected/cc940175-88a1-4c91-bb97-1c72a27560b7-kube-api-access-8tcx7\") pod \"nova-metadata-0\" (UID: \"cc940175-88a1-4c91-bb97-1c72a27560b7\") " pod="openstack/nova-metadata-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.034611 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/172b5953-ebb3-4eae-b8ee-33d59574f2ac-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"172b5953-ebb3-4eae-b8ee-33d59574f2ac\") " pod="openstack/nova-cell1-conductor-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.034790 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/172b5953-ebb3-4eae-b8ee-33d59574f2ac-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"172b5953-ebb3-4eae-b8ee-33d59574f2ac\") " pod="openstack/nova-cell1-conductor-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.056732 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z8jq\" (UniqueName: \"kubernetes.io/projected/172b5953-ebb3-4eae-b8ee-33d59574f2ac-kube-api-access-8z8jq\") pod \"nova-cell1-conductor-0\" (UID: \"172b5953-ebb3-4eae-b8ee-33d59574f2ac\") " pod="openstack/nova-cell1-conductor-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.129927 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc940175-88a1-4c91-bb97-1c72a27560b7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cc940175-88a1-4c91-bb97-1c72a27560b7\") " pod="openstack/nova-metadata-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.130104 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tcx7\" (UniqueName: \"kubernetes.io/projected/cc940175-88a1-4c91-bb97-1c72a27560b7-kube-api-access-8tcx7\") pod \"nova-metadata-0\" (UID: \"cc940175-88a1-4c91-bb97-1c72a27560b7\") " pod="openstack/nova-metadata-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.130158 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc940175-88a1-4c91-bb97-1c72a27560b7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cc940175-88a1-4c91-bb97-1c72a27560b7\") " pod="openstack/nova-metadata-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.130179 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc940175-88a1-4c91-bb97-1c72a27560b7-logs\") pod \"nova-metadata-0\" (UID: \"cc940175-88a1-4c91-bb97-1c72a27560b7\") " pod="openstack/nova-metadata-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.130200 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc940175-88a1-4c91-bb97-1c72a27560b7-config-data\") pod \"nova-metadata-0\" (UID: \"cc940175-88a1-4c91-bb97-1c72a27560b7\") " pod="openstack/nova-metadata-0" Jan 31 
04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.132393 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc940175-88a1-4c91-bb97-1c72a27560b7-logs\") pod \"nova-metadata-0\" (UID: \"cc940175-88a1-4c91-bb97-1c72a27560b7\") " pod="openstack/nova-metadata-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.147646 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc940175-88a1-4c91-bb97-1c72a27560b7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cc940175-88a1-4c91-bb97-1c72a27560b7\") " pod="openstack/nova-metadata-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.147714 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc940175-88a1-4c91-bb97-1c72a27560b7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cc940175-88a1-4c91-bb97-1c72a27560b7\") " pod="openstack/nova-metadata-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.148349 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc940175-88a1-4c91-bb97-1c72a27560b7-config-data\") pod \"nova-metadata-0\" (UID: \"cc940175-88a1-4c91-bb97-1c72a27560b7\") " pod="openstack/nova-metadata-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.151576 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tcx7\" (UniqueName: \"kubernetes.io/projected/cc940175-88a1-4c91-bb97-1c72a27560b7-kube-api-access-8tcx7\") pod \"nova-metadata-0\" (UID: \"cc940175-88a1-4c91-bb97-1c72a27560b7\") " pod="openstack/nova-metadata-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.234623 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.286127 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.338439 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3e1009d-834f-4fa9-90a7-771cce6f6558" path="/var/lib/kubelet/pods/b3e1009d-834f-4fa9-90a7-771cce6f6558/volumes" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.719248 4667 generic.go:334] "Generic (PLEG): container finished" podID="f8e6271b-34bd-43ad-9505-c4eff960694a" containerID="2e31892dded3b6d96b9608da2963ad1f77da32c3b2e8093c0559ce612c5d6a7c" exitCode=0 Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.720644 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f8e6271b-34bd-43ad-9505-c4eff960694a","Type":"ContainerDied","Data":"2e31892dded3b6d96b9608da2963ad1f77da32c3b2e8093c0559ce612c5d6a7c"} Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.763500 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.853112 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b2bv\" (UniqueName: \"kubernetes.io/projected/f8e6271b-34bd-43ad-9505-c4eff960694a-kube-api-access-7b2bv\") pod \"f8e6271b-34bd-43ad-9505-c4eff960694a\" (UID: \"f8e6271b-34bd-43ad-9505-c4eff960694a\") " Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.853364 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e6271b-34bd-43ad-9505-c4eff960694a-config-data\") pod \"f8e6271b-34bd-43ad-9505-c4eff960694a\" (UID: \"f8e6271b-34bd-43ad-9505-c4eff960694a\") " Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.853418 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e6271b-34bd-43ad-9505-c4eff960694a-combined-ca-bundle\") pod \"f8e6271b-34bd-43ad-9505-c4eff960694a\" (UID: \"f8e6271b-34bd-43ad-9505-c4eff960694a\") " Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.887209 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8e6271b-34bd-43ad-9505-c4eff960694a-kube-api-access-7b2bv" (OuterVolumeSpecName: "kube-api-access-7b2bv") pod "f8e6271b-34bd-43ad-9505-c4eff960694a" (UID: "f8e6271b-34bd-43ad-9505-c4eff960694a"). InnerVolumeSpecName "kube-api-access-7b2bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.905225 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8e6271b-34bd-43ad-9505-c4eff960694a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8e6271b-34bd-43ad-9505-c4eff960694a" (UID: "f8e6271b-34bd-43ad-9505-c4eff960694a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.927247 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8e6271b-34bd-43ad-9505-c4eff960694a-config-data" (OuterVolumeSpecName: "config-data") pod "f8e6271b-34bd-43ad-9505-c4eff960694a" (UID: "f8e6271b-34bd-43ad-9505-c4eff960694a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.956741 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b2bv\" (UniqueName: \"kubernetes.io/projected/f8e6271b-34bd-43ad-9505-c4eff960694a-kube-api-access-7b2bv\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.956969 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e6271b-34bd-43ad-9505-c4eff960694a-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.957082 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e6271b-34bd-43ad-9505-c4eff960694a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:33 crc kubenswrapper[4667]: W0131 04:10:33.961805 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod172b5953_ebb3_4eae_b8ee_33d59574f2ac.slice/crio-d2dc101ac5b9e0694edfd598a4fdc80a846677c951890c27cf7f5f304bcaa6a2 WatchSource:0}: Error finding container d2dc101ac5b9e0694edfd598a4fdc80a846677c951890c27cf7f5f304bcaa6a2: Status 404 returned error can't find the container with id d2dc101ac5b9e0694edfd598a4fdc80a846677c951890c27cf7f5f304bcaa6a2 Jan 31 04:10:33 crc kubenswrapper[4667]: I0131 04:10:33.966179 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.038112 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.581606 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.676822 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc2pm\" (UniqueName: \"kubernetes.io/projected/e1b9d2c1-5919-4939-8523-445092cad2a8-kube-api-access-lc2pm\") pod \"e1b9d2c1-5919-4939-8523-445092cad2a8\" (UID: \"e1b9d2c1-5919-4939-8523-445092cad2a8\") " Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.677905 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b9d2c1-5919-4939-8523-445092cad2a8-combined-ca-bundle\") pod \"e1b9d2c1-5919-4939-8523-445092cad2a8\" (UID: \"e1b9d2c1-5919-4939-8523-445092cad2a8\") " Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.677991 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1b9d2c1-5919-4939-8523-445092cad2a8-logs\") pod \"e1b9d2c1-5919-4939-8523-445092cad2a8\" (UID: \"e1b9d2c1-5919-4939-8523-445092cad2a8\") " Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.678051 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b9d2c1-5919-4939-8523-445092cad2a8-config-data\") pod \"e1b9d2c1-5919-4939-8523-445092cad2a8\" (UID: \"e1b9d2c1-5919-4939-8523-445092cad2a8\") " Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.678879 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1b9d2c1-5919-4939-8523-445092cad2a8-logs" (OuterVolumeSpecName: "logs") pod "e1b9d2c1-5919-4939-8523-445092cad2a8" (UID: "e1b9d2c1-5919-4939-8523-445092cad2a8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.684300 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1b9d2c1-5919-4939-8523-445092cad2a8-kube-api-access-lc2pm" (OuterVolumeSpecName: "kube-api-access-lc2pm") pod "e1b9d2c1-5919-4939-8523-445092cad2a8" (UID: "e1b9d2c1-5919-4939-8523-445092cad2a8"). InnerVolumeSpecName "kube-api-access-lc2pm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.723799 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1b9d2c1-5919-4939-8523-445092cad2a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1b9d2c1-5919-4939-8523-445092cad2a8" (UID: "e1b9d2c1-5919-4939-8523-445092cad2a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.729077 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1b9d2c1-5919-4939-8523-445092cad2a8-config-data" (OuterVolumeSpecName: "config-data") pod "e1b9d2c1-5919-4939-8523-445092cad2a8" (UID: "e1b9d2c1-5919-4939-8523-445092cad2a8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.744670 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"172b5953-ebb3-4eae-b8ee-33d59574f2ac","Type":"ContainerStarted","Data":"c2647b9c2676b6b1476bb1cc9a531130481f800fdb4565197c8b03199b64cac3"} Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.744725 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"172b5953-ebb3-4eae-b8ee-33d59574f2ac","Type":"ContainerStarted","Data":"d2dc101ac5b9e0694edfd598a4fdc80a846677c951890c27cf7f5f304bcaa6a2"} Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.745288 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.749988 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc940175-88a1-4c91-bb97-1c72a27560b7","Type":"ContainerStarted","Data":"cba9133fdf995faf2d55a7394dd4ff01f165970e7f87e8624b363e2f0908451c"} Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.750028 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc940175-88a1-4c91-bb97-1c72a27560b7","Type":"ContainerStarted","Data":"a4d4783cf317b5b57f030b14bd27b29616d92da3626608d2cae928eedaefe55f"} Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.750039 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc940175-88a1-4c91-bb97-1c72a27560b7","Type":"ContainerStarted","Data":"bf1b0877a81418e165365f2daa87f4edbba8653f22ee6f0a0199af1e855ba15b"} Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.753434 4667 generic.go:334] "Generic (PLEG): container finished" podID="e1b9d2c1-5919-4939-8523-445092cad2a8" containerID="8914066a25b5cde067722b64fd0e5e1853b49463dbc35ca70c4a15114d8cc9c3" exitCode=0 Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.753496 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e1b9d2c1-5919-4939-8523-445092cad2a8","Type":"ContainerDied","Data":"8914066a25b5cde067722b64fd0e5e1853b49463dbc35ca70c4a15114d8cc9c3"} Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.753525 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e1b9d2c1-5919-4939-8523-445092cad2a8","Type":"ContainerDied","Data":"b7723d1da6c495d13de0bad6ad3de664dea77b7768b28d649e649a77614a3491"} Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.753551 4667 scope.go:117] "RemoveContainer" containerID="8914066a25b5cde067722b64fd0e5e1853b49463dbc35ca70c4a15114d8cc9c3" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.753653 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.772892 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.772870533 podStartE2EDuration="2.772870533s" podCreationTimestamp="2026-01-31 04:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:10:34.76711761 +0000 UTC m=+1358.283452909" watchObservedRunningTime="2026-01-31 04:10:34.772870533 +0000 UTC m=+1358.289205832" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.775064 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f8e6271b-34bd-43ad-9505-c4eff960694a","Type":"ContainerDied","Data":"6ab9dcaca696af56d6bf79b551c5b132adfd1bbf4b97c930adabd056026f0a31"} Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.775165 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.781318 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc2pm\" (UniqueName: \"kubernetes.io/projected/e1b9d2c1-5919-4939-8523-445092cad2a8-kube-api-access-lc2pm\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.781353 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b9d2c1-5919-4939-8523-445092cad2a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.781364 4667 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1b9d2c1-5919-4939-8523-445092cad2a8-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.781374 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b9d2c1-5919-4939-8523-445092cad2a8-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.810207 4667 scope.go:117] "RemoveContainer" containerID="906a92cab5d3cc7dd7786eb0130aeb1d2854f0cc3769c0e8172ead36ae122878" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.817478 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.817458556 podStartE2EDuration="2.817458556s" podCreationTimestamp="2026-01-31 04:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:10:34.810615085 +0000 UTC m=+1358.326950384" watchObservedRunningTime="2026-01-31 04:10:34.817458556 +0000 UTC m=+1358.333793845" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.844585 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.916923 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.934759 4667 scope.go:117] "RemoveContainer" containerID="8914066a25b5cde067722b64fd0e5e1853b49463dbc35ca70c4a15114d8cc9c3" Jan 31 04:10:34 crc kubenswrapper[4667]: E0131 04:10:34.956897 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8914066a25b5cde067722b64fd0e5e1853b49463dbc35ca70c4a15114d8cc9c3\": container with ID starting with 8914066a25b5cde067722b64fd0e5e1853b49463dbc35ca70c4a15114d8cc9c3 not found: ID does not exist" containerID="8914066a25b5cde067722b64fd0e5e1853b49463dbc35ca70c4a15114d8cc9c3" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.957399 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8914066a25b5cde067722b64fd0e5e1853b49463dbc35ca70c4a15114d8cc9c3"} err="failed to get container status \"8914066a25b5cde067722b64fd0e5e1853b49463dbc35ca70c4a15114d8cc9c3\": rpc error: code = NotFound desc = could not find container \"8914066a25b5cde067722b64fd0e5e1853b49463dbc35ca70c4a15114d8cc9c3\": container with ID starting with 8914066a25b5cde067722b64fd0e5e1853b49463dbc35ca70c4a15114d8cc9c3 not found: ID does not exist" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.957434 4667 scope.go:117] "RemoveContainer" containerID="906a92cab5d3cc7dd7786eb0130aeb1d2854f0cc3769c0e8172ead36ae122878" Jan 31 04:10:34 crc kubenswrapper[4667]: E0131 04:10:34.962483 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"906a92cab5d3cc7dd7786eb0130aeb1d2854f0cc3769c0e8172ead36ae122878\": container with ID starting with 906a92cab5d3cc7dd7786eb0130aeb1d2854f0cc3769c0e8172ead36ae122878 not found: ID does not exist" containerID="906a92cab5d3cc7dd7786eb0130aeb1d2854f0cc3769c0e8172ead36ae122878" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.962542 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"906a92cab5d3cc7dd7786eb0130aeb1d2854f0cc3769c0e8172ead36ae122878"} err="failed to get container status \"906a92cab5d3cc7dd7786eb0130aeb1d2854f0cc3769c0e8172ead36ae122878\": rpc error: code = NotFound desc = could not find container \"906a92cab5d3cc7dd7786eb0130aeb1d2854f0cc3769c0e8172ead36ae122878\": container with ID starting with 906a92cab5d3cc7dd7786eb0130aeb1d2854f0cc3769c0e8172ead36ae122878 not found: ID does not exist" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.962595 4667 scope.go:117] "RemoveContainer" containerID="2e31892dded3b6d96b9608da2963ad1f77da32c3b2e8093c0559ce612c5d6a7c" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.977214 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.993615 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 04:10:34 crc kubenswrapper[4667]: E0131 04:10:34.995755 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1b9d2c1-5919-4939-8523-445092cad2a8" containerName="nova-api-api" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.995792 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b9d2c1-5919-4939-8523-445092cad2a8" containerName="nova-api-api" Jan 31 04:10:34 crc kubenswrapper[4667]: E0131 04:10:34.995887 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e6271b-34bd-43ad-9505-c4eff960694a" containerName="nova-scheduler-scheduler" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.995906 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e6271b-34bd-43ad-9505-c4eff960694a" containerName="nova-scheduler-scheduler" Jan 31 04:10:34 crc kubenswrapper[4667]: E0131 04:10:34.995948 4667 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e1b9d2c1-5919-4939-8523-445092cad2a8" containerName="nova-api-log" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.995963 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b9d2c1-5919-4939-8523-445092cad2a8" containerName="nova-api-log" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.996992 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1b9d2c1-5919-4939-8523-445092cad2a8" containerName="nova-api-log" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.997025 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8e6271b-34bd-43ad-9505-c4eff960694a" containerName="nova-scheduler-scheduler" Jan 31 04:10:34 crc kubenswrapper[4667]: I0131 04:10:34.997044 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1b9d2c1-5919-4939-8523-445092cad2a8" containerName="nova-api-api" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.000441 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.007955 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.071002 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.081667 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.083748 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.086729 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.092824 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.104145 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321\") " pod="openstack/nova-api-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.104193 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-logs\") pod \"nova-api-0\" (UID: \"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321\") " pod="openstack/nova-api-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.104337 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7qfg\" (UniqueName: \"kubernetes.io/projected/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-kube-api-access-c7qfg\") pod \"nova-api-0\" (UID: \"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321\") " pod="openstack/nova-api-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.104366 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-config-data\") pod \"nova-api-0\" (UID: \"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321\") " pod="openstack/nova-api-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.110414 4667 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.206542 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2\") " pod="openstack/nova-scheduler-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.206611 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321\") " pod="openstack/nova-api-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.206638 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-logs\") pod \"nova-api-0\" (UID: \"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321\") " pod="openstack/nova-api-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.206770 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdm4x\" (UniqueName: \"kubernetes.io/projected/0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2-kube-api-access-tdm4x\") pod \"nova-scheduler-0\" (UID: \"0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2\") " pod="openstack/nova-scheduler-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.206812 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2-config-data\") pod \"nova-scheduler-0\" (UID: \"0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2\") " pod="openstack/nova-scheduler-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.206950 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7qfg\" (UniqueName: \"kubernetes.io/projected/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-kube-api-access-c7qfg\") pod \"nova-api-0\" (UID: \"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321\") " pod="openstack/nova-api-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.206974 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-config-data\") pod \"nova-api-0\" (UID: \"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321\") " pod="openstack/nova-api-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.207825 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-logs\") pod \"nova-api-0\" (UID: \"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321\") " pod="openstack/nova-api-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.223399 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321\") " pod="openstack/nova-api-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.223477 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-config-data\") pod \"nova-api-0\" (UID: \"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321\") " pod="openstack/nova-api-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.227985 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7qfg\" (UniqueName: \"kubernetes.io/projected/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-kube-api-access-c7qfg\") pod \"nova-api-0\" (UID: \"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321\") " pod="openstack/nova-api-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.302229 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1b9d2c1-5919-4939-8523-445092cad2a8" path="/var/lib/kubelet/pods/e1b9d2c1-5919-4939-8523-445092cad2a8/volumes" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.304292 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8e6271b-34bd-43ad-9505-c4eff960694a" path="/var/lib/kubelet/pods/f8e6271b-34bd-43ad-9505-c4eff960694a/volumes" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.310266 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdm4x\" (UniqueName: \"kubernetes.io/projected/0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2-kube-api-access-tdm4x\") pod \"nova-scheduler-0\" (UID: \"0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2\") " pod="openstack/nova-scheduler-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.310388 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2-config-data\") pod \"nova-scheduler-0\" (UID: \"0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2\") " pod="openstack/nova-scheduler-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.310816 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2\") " pod="openstack/nova-scheduler-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.328949 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2\") " pod="openstack/nova-scheduler-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.330191 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2-config-data\") pod \"nova-scheduler-0\" (UID: \"0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2\") " pod="openstack/nova-scheduler-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.336138 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdm4x\" (UniqueName: \"kubernetes.io/projected/0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2-kube-api-access-tdm4x\") pod \"nova-scheduler-0\" (UID: \"0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2\") " pod="openstack/nova-scheduler-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.352161 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.416070 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 04:10:35 crc kubenswrapper[4667]: W0131 04:10:35.975611 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0dcf0db8_eeb3_49d3_8e36_a69f48aaf7e2.slice/crio-0e3def4e763e31de1e874e63d9e1dee7e46d2caffc2c1db967cc3bea21f44a8e WatchSource:0}: Error finding container 0e3def4e763e31de1e874e63d9e1dee7e46d2caffc2c1db967cc3bea21f44a8e: Status 404 returned error can't find the container with id 0e3def4e763e31de1e874e63d9e1dee7e46d2caffc2c1db967cc3bea21f44a8e Jan 31 04:10:35 crc kubenswrapper[4667]: I0131 04:10:35.985862 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:10:36 crc kubenswrapper[4667]: W0131 04:10:36.054569 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c7127a4_f6e5_4c2c_bd1d_6203ad9fb321.slice/crio-78ae642d2e19289dacda4db5b2e86b00d838df02780fd8d1414dcf684b6b8107 WatchSource:0}: Error finding container 78ae642d2e19289dacda4db5b2e86b00d838df02780fd8d1414dcf684b6b8107: Status 404 returned error can't find the container with id 78ae642d2e19289dacda4db5b2e86b00d838df02780fd8d1414dcf684b6b8107 Jan 31 04:10:36 crc kubenswrapper[4667]: I0131 04:10:36.062680 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:10:36 crc kubenswrapper[4667]: I0131 04:10:36.817455 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321","Type":"ContainerStarted","Data":"976a086def1af0f85f05cb073f62a2d56f8a4de5aa409120a71687681df43cec"} Jan 31 04:10:36 crc kubenswrapper[4667]: I0131 04:10:36.817987 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321","Type":"ContainerStarted","Data":"06ba30b7dc4b9df257d4934133689541dba29a5f5be72926d7637d51318a18df"} Jan 31 04:10:36 crc kubenswrapper[4667]: I0131 04:10:36.817998 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321","Type":"ContainerStarted","Data":"78ae642d2e19289dacda4db5b2e86b00d838df02780fd8d1414dcf684b6b8107"} Jan 31 04:10:36 crc kubenswrapper[4667]: I0131 04:10:36.820263 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2","Type":"ContainerStarted","Data":"67ba812a3dbb1bddb983d786ecb4a9f2a74e9a86a95066e5d8fe48e58923a1bd"} Jan 31 04:10:36 crc kubenswrapper[4667]: I0131 04:10:36.820324 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2","Type":"ContainerStarted","Data":"0e3def4e763e31de1e874e63d9e1dee7e46d2caffc2c1db967cc3bea21f44a8e"} Jan 31 04:10:36 crc kubenswrapper[4667]: I0131 04:10:36.849757 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.849731664 podStartE2EDuration="2.849731664s" podCreationTimestamp="2026-01-31 04:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:10:36.839312458 +0000 UTC m=+1360.355647757" watchObservedRunningTime="2026-01-31 04:10:36.849731664 +0000 UTC m=+1360.366066963" Jan 31 04:10:38 crc kubenswrapper[4667]: I0131 04:10:38.287674 4667 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 04:10:38 crc kubenswrapper[4667]: I0131 04:10:38.288235 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 04:10:40 crc kubenswrapper[4667]: I0131 04:10:40.416426 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 04:10:41 crc kubenswrapper[4667]: I0131 04:10:41.755030 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:10:41 crc kubenswrapper[4667]: I0131 04:10:41.756188 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:10:41 crc kubenswrapper[4667]: I0131 04:10:41.757510 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 31 04:10:41 crc kubenswrapper[4667]: I0131 04:10:41.843890 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:10:41 crc kubenswrapper[4667]: I0131 04:10:41.844253 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:10:41 crc kubenswrapper[4667]: I0131 04:10:41.845624 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86c748c4d6-2grmh" podUID="c6974567-3bea-447a-bb8b-ced22b6d34ce" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 31 04:10:43 crc kubenswrapper[4667]: I0131 04:10:43.298253 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 31 04:10:43 crc kubenswrapper[4667]: I0131 04:10:43.302088 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 04:10:43 crc kubenswrapper[4667]: I0131 04:10:43.302394 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 04:10:43 crc kubenswrapper[4667]: I0131 04:10:43.342966 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=9.342942814 podStartE2EDuration="9.342942814s" podCreationTimestamp="2026-01-31 04:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:10:36.878456127 +0000 UTC m=+1360.394791426" watchObservedRunningTime="2026-01-31 04:10:43.342942814 +0000 UTC m=+1366.859278113" Jan 31 04:10:44 crc kubenswrapper[4667]: I0131 04:10:44.316251 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cc940175-88a1-4c91-bb97-1c72a27560b7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 04:10:44 crc kubenswrapper[4667]: I0131 04:10:44.316558 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="cc940175-88a1-4c91-bb97-1c72a27560b7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 04:10:45 crc kubenswrapper[4667]: I0131 04:10:45.351400 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 04:10:45 crc kubenswrapper[4667]: I0131 04:10:45.353163 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 04:10:45 crc kubenswrapper[4667]: I0131 04:10:45.417348 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 04:10:45 crc kubenswrapper[4667]: I0131 04:10:45.460642 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 04:10:45 crc kubenswrapper[4667]: I0131 04:10:45.957833 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 04:10:46 crc kubenswrapper[4667]: I0131 04:10:46.351684 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 04:10:46 crc kubenswrapper[4667]: I0131 04:10:46.394170 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 31 04:10:49 crc kubenswrapper[4667]: I0131 04:10:49.483693 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:49 crc kubenswrapper[4667]: I0131 04:10:49.633472 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2ce69c-d98e-40a4-b6e9-be00a97ca5af-config-data\") pod \"3b2ce69c-d98e-40a4-b6e9-be00a97ca5af\" (UID: \"3b2ce69c-d98e-40a4-b6e9-be00a97ca5af\") " Jan 31 04:10:49 crc kubenswrapper[4667]: I0131 04:10:49.633565 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpzf6\" (UniqueName: \"kubernetes.io/projected/3b2ce69c-d98e-40a4-b6e9-be00a97ca5af-kube-api-access-rpzf6\") pod \"3b2ce69c-d98e-40a4-b6e9-be00a97ca5af\" (UID: \"3b2ce69c-d98e-40a4-b6e9-be00a97ca5af\") " Jan 31 04:10:49 crc kubenswrapper[4667]: I0131 04:10:49.633793 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2ce69c-d98e-40a4-b6e9-be00a97ca5af-combined-ca-bundle\") pod \"3b2ce69c-d98e-40a4-b6e9-be00a97ca5af\" (UID: \"3b2ce69c-d98e-40a4-b6e9-be00a97ca5af\") " Jan 31 04:10:49 crc kubenswrapper[4667]: I0131 04:10:49.658675 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b2ce69c-d98e-40a4-b6e9-be00a97ca5af-kube-api-access-rpzf6" (OuterVolumeSpecName: "kube-api-access-rpzf6") pod "3b2ce69c-d98e-40a4-b6e9-be00a97ca5af" (UID: "3b2ce69c-d98e-40a4-b6e9-be00a97ca5af"). InnerVolumeSpecName "kube-api-access-rpzf6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:10:49 crc kubenswrapper[4667]: I0131 04:10:49.668701 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2ce69c-d98e-40a4-b6e9-be00a97ca5af-config-data" (OuterVolumeSpecName: "config-data") pod "3b2ce69c-d98e-40a4-b6e9-be00a97ca5af" (UID: "3b2ce69c-d98e-40a4-b6e9-be00a97ca5af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:49 crc kubenswrapper[4667]: I0131 04:10:49.670009 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b2ce69c-d98e-40a4-b6e9-be00a97ca5af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b2ce69c-d98e-40a4-b6e9-be00a97ca5af" (UID: "3b2ce69c-d98e-40a4-b6e9-be00a97ca5af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:10:49 crc kubenswrapper[4667]: I0131 04:10:49.737598 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2ce69c-d98e-40a4-b6e9-be00a97ca5af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:49 crc kubenswrapper[4667]: I0131 04:10:49.737642 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2ce69c-d98e-40a4-b6e9-be00a97ca5af-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:49 crc kubenswrapper[4667]: I0131 04:10:49.737663 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpzf6\" (UniqueName: \"kubernetes.io/projected/3b2ce69c-d98e-40a4-b6e9-be00a97ca5af-kube-api-access-rpzf6\") on node \"crc\" DevicePath \"\"" Jan 31 04:10:49 crc kubenswrapper[4667]: I0131 04:10:49.750804 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 04:10:49 crc kubenswrapper[4667]: I0131 04:10:49.984270 4667 generic.go:334] "Generic (PLEG): container finished" podID="3b2ce69c-d98e-40a4-b6e9-be00a97ca5af" containerID="672a55b391cc5adf1f60292ca80ce2df2d1792ffb35906a85d4ff27725aac6b0" exitCode=137 Jan 31 04:10:49 crc kubenswrapper[4667]: I0131 04:10:49.984516 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3b2ce69c-d98e-40a4-b6e9-be00a97ca5af","Type":"ContainerDied","Data":"672a55b391cc5adf1f60292ca80ce2df2d1792ffb35906a85d4ff27725aac6b0"} Jan 31 04:10:49 crc kubenswrapper[4667]: I0131 04:10:49.984747 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3b2ce69c-d98e-40a4-b6e9-be00a97ca5af","Type":"ContainerDied","Data":"22e5261a75fb26e7c42c5bcc924b67dfa5b2bd1eae95adce64df57abf689ff29"} Jan 31 04:10:49 crc kubenswrapper[4667]: I0131 04:10:49.984778 4667 scope.go:117] "RemoveContainer" containerID="672a55b391cc5adf1f60292ca80ce2df2d1792ffb35906a85d4ff27725aac6b0" Jan 31 04:10:49 crc kubenswrapper[4667]: I0131 04:10:49.984613 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.025925 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.034054 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.054594 4667 scope.go:117] "RemoveContainer" containerID="672a55b391cc5adf1f60292ca80ce2df2d1792ffb35906a85d4ff27725aac6b0" Jan 31 04:10:50 crc kubenswrapper[4667]: E0131 04:10:50.065413 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"672a55b391cc5adf1f60292ca80ce2df2d1792ffb35906a85d4ff27725aac6b0\": container with ID starting with 672a55b391cc5adf1f60292ca80ce2df2d1792ffb35906a85d4ff27725aac6b0 not found: ID does not exist" containerID="672a55b391cc5adf1f60292ca80ce2df2d1792ffb35906a85d4ff27725aac6b0" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.065466 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672a55b391cc5adf1f60292ca80ce2df2d1792ffb35906a85d4ff27725aac6b0"} err="failed to get container status \"672a55b391cc5adf1f60292ca80ce2df2d1792ffb35906a85d4ff27725aac6b0\": rpc error: code = NotFound desc = could not find container \"672a55b391cc5adf1f60292ca80ce2df2d1792ffb35906a85d4ff27725aac6b0\": container with ID starting with 672a55b391cc5adf1f60292ca80ce2df2d1792ffb35906a85d4ff27725aac6b0 not found: ID does not exist" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.096495 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 04:10:50 crc kubenswrapper[4667]: E0131 04:10:50.101715 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b2ce69c-d98e-40a4-b6e9-be00a97ca5af" containerName="nova-cell1-novncproxy-novncproxy" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.101753 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b2ce69c-d98e-40a4-b6e9-be00a97ca5af" containerName="nova-cell1-novncproxy-novncproxy" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.105252 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b2ce69c-d98e-40a4-b6e9-be00a97ca5af" containerName="nova-cell1-novncproxy-novncproxy" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.107956 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.111664 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.114547 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.115325 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.117248 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.258949 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4aa65868-008b-4a37-ba24-d4d3872c00c7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa65868-008b-4a37-ba24-d4d3872c00c7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.259018 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qfwz\" (UniqueName: \"kubernetes.io/projected/4aa65868-008b-4a37-ba24-d4d3872c00c7-kube-api-access-2qfwz\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa65868-008b-4a37-ba24-d4d3872c00c7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.259328 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aa65868-008b-4a37-ba24-d4d3872c00c7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa65868-008b-4a37-ba24-d4d3872c00c7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.259521 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4aa65868-008b-4a37-ba24-d4d3872c00c7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa65868-008b-4a37-ba24-d4d3872c00c7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.259854 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aa65868-008b-4a37-ba24-d4d3872c00c7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa65868-008b-4a37-ba24-d4d3872c00c7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.361941 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aa65868-008b-4a37-ba24-d4d3872c00c7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa65868-008b-4a37-ba24-d4d3872c00c7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.362041 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4aa65868-008b-4a37-ba24-d4d3872c00c7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa65868-008b-4a37-ba24-d4d3872c00c7\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.362140 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aa65868-008b-4a37-ba24-d4d3872c00c7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa65868-008b-4a37-ba24-d4d3872c00c7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.362170 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4aa65868-008b-4a37-ba24-d4d3872c00c7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa65868-008b-4a37-ba24-d4d3872c00c7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.362192 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qfwz\" (UniqueName: \"kubernetes.io/projected/4aa65868-008b-4a37-ba24-d4d3872c00c7-kube-api-access-2qfwz\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa65868-008b-4a37-ba24-d4d3872c00c7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.375868 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aa65868-008b-4a37-ba24-d4d3872c00c7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa65868-008b-4a37-ba24-d4d3872c00c7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.376459 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aa65868-008b-4a37-ba24-d4d3872c00c7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa65868-008b-4a37-ba24-d4d3872c00c7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.377007 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4aa65868-008b-4a37-ba24-d4d3872c00c7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa65868-008b-4a37-ba24-d4d3872c00c7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.388956 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4aa65868-008b-4a37-ba24-d4d3872c00c7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa65868-008b-4a37-ba24-d4d3872c00c7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.407920 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qfwz\" (UniqueName: \"kubernetes.io/projected/4aa65868-008b-4a37-ba24-d4d3872c00c7-kube-api-access-2qfwz\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa65868-008b-4a37-ba24-d4d3872c00c7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.427920 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.929686 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 31 04:10:50 crc kubenswrapper[4667]: I0131 04:10:50.999630 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4aa65868-008b-4a37-ba24-d4d3872c00c7","Type":"ContainerStarted","Data":"4860df2d642253de829eb3436df232e1b6058400acac61cdd0ce59e78bad1cf1"} Jan 31 04:10:51 crc kubenswrapper[4667]: I0131 04:10:51.298015 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b2ce69c-d98e-40a4-b6e9-be00a97ca5af" path="/var/lib/kubelet/pods/3b2ce69c-d98e-40a4-b6e9-be00a97ca5af/volumes" Jan 31 04:10:51 crc kubenswrapper[4667]: I0131 04:10:51.755678 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 31 04:10:51 crc kubenswrapper[4667]: I0131 04:10:51.843649 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86c748c4d6-2grmh" podUID="c6974567-3bea-447a-bb8b-ced22b6d34ce" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 31 04:10:52 crc kubenswrapper[4667]: I0131 04:10:52.022277 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4aa65868-008b-4a37-ba24-d4d3872c00c7","Type":"ContainerStarted","Data":"31b2d434783a69a38446b74249b6f623d3638aec9d1078e5c96b2ce002ab94b1"} Jan 31 04:10:52 crc kubenswrapper[4667]: I0131 04:10:52.049349 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.049328992 podStartE2EDuration="2.049328992s" podCreationTimestamp="2026-01-31 04:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:10:52.045740207 +0000 UTC m=+1375.562075516" watchObservedRunningTime="2026-01-31 04:10:52.049328992 +0000 UTC m=+1375.565664291" Jan 31 04:10:53 crc kubenswrapper[4667]: I0131 04:10:53.295621 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 04:10:53 crc kubenswrapper[4667]: I0131 04:10:53.302327 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 04:10:53 crc kubenswrapper[4667]: I0131 04:10:53.320348 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 04:10:54 crc kubenswrapper[4667]: I0131 04:10:54.082099 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 04:10:55 crc kubenswrapper[4667]: I0131 04:10:55.357400 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 04:10:55 crc kubenswrapper[4667]: I0131 04:10:55.358274 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 04:10:55 crc kubenswrapper[4667]: I0131 04:10:55.361920 4667 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 04:10:55 crc kubenswrapper[4667]: I0131 04:10:55.369518 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 04:10:55 crc kubenswrapper[4667]: I0131 04:10:55.428176 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.069407 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.073374 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.380071 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-zq45f"] Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.381819 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.407275 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-zq45f"] Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.512441 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-config\") pod \"dnsmasq-dns-89c5cd4d5-zq45f\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.512503 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mggr9\" (UniqueName: \"kubernetes.io/projected/3110271e-39e9-431e-a5dd-880758179c6c-kube-api-access-mggr9\") pod \"dnsmasq-dns-89c5cd4d5-zq45f\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.512541 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-zq45f\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.512912 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-zq45f\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.513053 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-zq45f\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.513191 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-ovsdbserver-sb\") pod 
\"dnsmasq-dns-89c5cd4d5-zq45f\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.615316 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-config\") pod \"dnsmasq-dns-89c5cd4d5-zq45f\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.615396 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mggr9\" (UniqueName: \"kubernetes.io/projected/3110271e-39e9-431e-a5dd-880758179c6c-kube-api-access-mggr9\") pod \"dnsmasq-dns-89c5cd4d5-zq45f\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.615440 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-zq45f\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.615487 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-zq45f\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.615517 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-zq45f\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.616531 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-config\") pod \"dnsmasq-dns-89c5cd4d5-zq45f\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.615554 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-zq45f\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.616545 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-zq45f\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.616619 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-zq45f\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 
31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.616785 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-zq45f\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.617351 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-zq45f\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.646984 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mggr9\" (UniqueName: \"kubernetes.io/projected/3110271e-39e9-431e-a5dd-880758179c6c-kube-api-access-mggr9\") pod \"dnsmasq-dns-89c5cd4d5-zq45f\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:10:56 crc kubenswrapper[4667]: I0131 04:10:56.715969 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:10:57 crc kubenswrapper[4667]: I0131 04:10:57.717690 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-zq45f"] Jan 31 04:10:58 crc kubenswrapper[4667]: I0131 04:10:58.095227 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" event={"ID":"3110271e-39e9-431e-a5dd-880758179c6c","Type":"ContainerStarted","Data":"0f5e51e10d43b469a556ebd91ce1fb8b1cc085160205ae9712b4efd0eeee893c"} Jan 31 04:10:58 crc kubenswrapper[4667]: I0131 04:10:58.095605 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" event={"ID":"3110271e-39e9-431e-a5dd-880758179c6c","Type":"ContainerStarted","Data":"9d09d7b3b9debff2199a2e05bfa578f16a7608f51590714bab879ade4de3a250"} Jan 31 04:10:59 crc kubenswrapper[4667]: I0131 04:10:59.106778 4667 generic.go:334] "Generic (PLEG): container finished" podID="3110271e-39e9-431e-a5dd-880758179c6c" containerID="0f5e51e10d43b469a556ebd91ce1fb8b1cc085160205ae9712b4efd0eeee893c" exitCode=0 Jan 31 04:10:59 crc kubenswrapper[4667]: I0131 04:10:59.106880 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" event={"ID":"3110271e-39e9-431e-a5dd-880758179c6c","Type":"ContainerDied","Data":"0f5e51e10d43b469a556ebd91ce1fb8b1cc085160205ae9712b4efd0eeee893c"} Jan 31 04:10:59 crc kubenswrapper[4667]: I0131 04:10:59.443832 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:10:59 crc kubenswrapper[4667]: I0131 04:10:59.444609 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" containerName="ceilometer-central-agent" containerID="cri-o://86e9cca7895c69c2a106eaed4f81e2df2b707dbddf9b0b60c93de799023e38d9" gracePeriod=30 Jan 31 04:10:59 crc kubenswrapper[4667]: I0131 04:10:59.444689 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" containerName="sg-core" containerID="cri-o://6ca3b1af75834ef27fb2703bbd239f425c7417dbab5a59bd3150a8e71bd993e9" gracePeriod=30 Jan 31 
04:10:59 crc kubenswrapper[4667]: I0131 04:10:59.444763 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" containerName="ceilometer-notification-agent" containerID="cri-o://15b01ff7de5eaf63c13a0855e5d28dc639ce10af2ea78aa019afbb0274e79bef" gracePeriod=30 Jan 31 04:10:59 crc kubenswrapper[4667]: I0131 04:10:59.445036 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" containerName="proxy-httpd" containerID="cri-o://c84a5c48601f0984c2480dee8470e96b21aa2471e1eafac74da7ecc786ffbc25" gracePeriod=30 Jan 31 04:10:59 crc kubenswrapper[4667]: I0131 04:10:59.662812 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:10:59 crc kubenswrapper[4667]: I0131 04:10:59.664495 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321" containerName="nova-api-log" containerID="cri-o://06ba30b7dc4b9df257d4934133689541dba29a5f5be72926d7637d51318a18df" gracePeriod=30 Jan 31 04:10:59 crc kubenswrapper[4667]: I0131 04:10:59.664610 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321" containerName="nova-api-api" containerID="cri-o://976a086def1af0f85f05cb073f62a2d56f8a4de5aa409120a71687681df43cec" gracePeriod=30 Jan 31 04:11:00 crc kubenswrapper[4667]: I0131 04:11:00.117303 4667 generic.go:334] "Generic (PLEG): container finished" podID="1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321" containerID="06ba30b7dc4b9df257d4934133689541dba29a5f5be72926d7637d51318a18df" exitCode=143 Jan 31 04:11:00 crc kubenswrapper[4667]: I0131 04:11:00.117366 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321","Type":"ContainerDied","Data":"06ba30b7dc4b9df257d4934133689541dba29a5f5be72926d7637d51318a18df"} Jan 31 04:11:00 crc kubenswrapper[4667]: I0131 04:11:00.118795 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" event={"ID":"3110271e-39e9-431e-a5dd-880758179c6c","Type":"ContainerStarted","Data":"966be1fbd42996f3a22e285c0682af2d33ea60b5652cb469a3dc2a2b9a75e8c5"} Jan 31 04:11:00 crc kubenswrapper[4667]: I0131 04:11:00.120354 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:11:00 crc kubenswrapper[4667]: I0131 04:11:00.122418 4667 generic.go:334] "Generic (PLEG): container finished" podID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" containerID="c84a5c48601f0984c2480dee8470e96b21aa2471e1eafac74da7ecc786ffbc25" exitCode=0 Jan 31 04:11:00 crc kubenswrapper[4667]: I0131 04:11:00.122444 4667 generic.go:334] "Generic (PLEG): container finished" podID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" containerID="6ca3b1af75834ef27fb2703bbd239f425c7417dbab5a59bd3150a8e71bd993e9" exitCode=2 Jan 31 04:11:00 crc kubenswrapper[4667]: I0131 04:11:00.122451 4667 generic.go:334] "Generic (PLEG): container finished" podID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" containerID="86e9cca7895c69c2a106eaed4f81e2df2b707dbddf9b0b60c93de799023e38d9" exitCode=0 Jan 31 04:11:00 crc kubenswrapper[4667]: I0131 04:11:00.122476 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f","Type":"ContainerDied","Data":"c84a5c48601f0984c2480dee8470e96b21aa2471e1eafac74da7ecc786ffbc25"} Jan 31 04:11:00 crc kubenswrapper[4667]: I0131 04:11:00.122506 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f","Type":"ContainerDied","Data":"6ca3b1af75834ef27fb2703bbd239f425c7417dbab5a59bd3150a8e71bd993e9"} Jan 31 04:11:00 crc kubenswrapper[4667]: I0131 04:11:00.122518 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f","Type":"ContainerDied","Data":"86e9cca7895c69c2a106eaed4f81e2df2b707dbddf9b0b60c93de799023e38d9"} Jan 31 04:11:00 crc kubenswrapper[4667]: I0131 04:11:00.148780 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" podStartSLOduration=4.14875484 podStartE2EDuration="4.14875484s" podCreationTimestamp="2026-01-31 04:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:11:00.143171642 +0000 UTC m=+1383.659506941" watchObservedRunningTime="2026-01-31 04:11:00.14875484 +0000 UTC m=+1383.665090129" Jan 31 04:11:00 crc kubenswrapper[4667]: I0131 04:11:00.429125 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:11:00 crc kubenswrapper[4667]: I0131 04:11:00.450317 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:11:01 crc kubenswrapper[4667]: I0131 04:11:01.149198 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 31 04:11:01 crc kubenswrapper[4667]: I0131 04:11:01.384639 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-67h7k"] Jan 31 04:11:01 crc kubenswrapper[4667]: I0131 04:11:01.386239 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-67h7k" Jan 31 04:11:01 crc kubenswrapper[4667]: I0131 04:11:01.389607 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 31 04:11:01 crc kubenswrapper[4667]: I0131 04:11:01.401424 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-67h7k"] Jan 31 04:11:01 crc kubenswrapper[4667]: I0131 04:11:01.402319 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 31 04:11:01 crc kubenswrapper[4667]: I0131 04:11:01.528793 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9676c6cd-275c-4aaa-86b6-cdcca7df370e-config-data\") pod \"nova-cell1-cell-mapping-67h7k\" (UID: \"9676c6cd-275c-4aaa-86b6-cdcca7df370e\") " pod="openstack/nova-cell1-cell-mapping-67h7k" Jan 31 04:11:01 crc kubenswrapper[4667]: I0131 04:11:01.528998 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fghmr\" (UniqueName: \"kubernetes.io/projected/9676c6cd-275c-4aaa-86b6-cdcca7df370e-kube-api-access-fghmr\") pod \"nova-cell1-cell-mapping-67h7k\" (UID: \"9676c6cd-275c-4aaa-86b6-cdcca7df370e\") " pod="openstack/nova-cell1-cell-mapping-67h7k" Jan 31 04:11:01 crc kubenswrapper[4667]: I0131 04:11:01.529103 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9676c6cd-275c-4aaa-86b6-cdcca7df370e-scripts\") pod \"nova-cell1-cell-mapping-67h7k\" (UID: \"9676c6cd-275c-4aaa-86b6-cdcca7df370e\") " pod="openstack/nova-cell1-cell-mapping-67h7k" Jan 31 04:11:01 crc kubenswrapper[4667]: I0131 04:11:01.529163 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9676c6cd-275c-4aaa-86b6-cdcca7df370e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-67h7k\" (UID: \"9676c6cd-275c-4aaa-86b6-cdcca7df370e\") " pod="openstack/nova-cell1-cell-mapping-67h7k" Jan 31 04:11:01 crc kubenswrapper[4667]: I0131 04:11:01.631410 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fghmr\" (UniqueName: \"kubernetes.io/projected/9676c6cd-275c-4aaa-86b6-cdcca7df370e-kube-api-access-fghmr\") pod \"nova-cell1-cell-mapping-67h7k\" (UID: \"9676c6cd-275c-4aaa-86b6-cdcca7df370e\") " pod="openstack/nova-cell1-cell-mapping-67h7k" Jan 31 04:11:01 crc kubenswrapper[4667]: I0131 04:11:01.631536 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9676c6cd-275c-4aaa-86b6-cdcca7df370e-scripts\") pod \"nova-cell1-cell-mapping-67h7k\" (UID: \"9676c6cd-275c-4aaa-86b6-cdcca7df370e\") " pod="openstack/nova-cell1-cell-mapping-67h7k" Jan 31 04:11:01 crc kubenswrapper[4667]: I0131 04:11:01.631583 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9676c6cd-275c-4aaa-86b6-cdcca7df370e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-67h7k\" (UID: \"9676c6cd-275c-4aaa-86b6-cdcca7df370e\") " pod="openstack/nova-cell1-cell-mapping-67h7k" Jan 31 04:11:01 crc kubenswrapper[4667]: I0131 04:11:01.631639 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/9676c6cd-275c-4aaa-86b6-cdcca7df370e-config-data\") pod \"nova-cell1-cell-mapping-67h7k\" (UID: \"9676c6cd-275c-4aaa-86b6-cdcca7df370e\") " pod="openstack/nova-cell1-cell-mapping-67h7k" Jan 31 04:11:01 crc kubenswrapper[4667]: I0131 04:11:01.639698 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9676c6cd-275c-4aaa-86b6-cdcca7df370e-scripts\") pod \"nova-cell1-cell-mapping-67h7k\" (UID: \"9676c6cd-275c-4aaa-86b6-cdcca7df370e\") " pod="openstack/nova-cell1-cell-mapping-67h7k" Jan 31 04:11:01 crc kubenswrapper[4667]: I0131 04:11:01.641682 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9676c6cd-275c-4aaa-86b6-cdcca7df370e-config-data\") pod \"nova-cell1-cell-mapping-67h7k\" (UID: \"9676c6cd-275c-4aaa-86b6-cdcca7df370e\") " pod="openstack/nova-cell1-cell-mapping-67h7k" Jan 31 04:11:01 crc kubenswrapper[4667]: I0131 04:11:01.642046 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9676c6cd-275c-4aaa-86b6-cdcca7df370e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-67h7k\" (UID: \"9676c6cd-275c-4aaa-86b6-cdcca7df370e\") " pod="openstack/nova-cell1-cell-mapping-67h7k" Jan 31 04:11:01 crc kubenswrapper[4667]: I0131 04:11:01.653937 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fghmr\" (UniqueName: \"kubernetes.io/projected/9676c6cd-275c-4aaa-86b6-cdcca7df370e-kube-api-access-fghmr\") pod \"nova-cell1-cell-mapping-67h7k\" (UID: \"9676c6cd-275c-4aaa-86b6-cdcca7df370e\") " pod="openstack/nova-cell1-cell-mapping-67h7k" Jan 31 04:11:01 crc kubenswrapper[4667]: I0131 04:11:01.732622 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-67h7k" Jan 31 04:11:02 crc kubenswrapper[4667]: I0131 04:11:02.238455 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-67h7k"] Jan 31 04:11:02 crc kubenswrapper[4667]: W0131 04:11:02.240746 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9676c6cd_275c_4aaa_86b6_cdcca7df370e.slice/crio-d368e9ab92aa6b536523c7382d02760a75881711d5378a0c96ab6713d906c3c4 WatchSource:0}: Error finding container d368e9ab92aa6b536523c7382d02760a75881711d5378a0c96ab6713d906c3c4: Status 404 returned error can't find the container with id d368e9ab92aa6b536523c7382d02760a75881711d5378a0c96ab6713d906c3c4 Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.172994 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-67h7k" event={"ID":"9676c6cd-275c-4aaa-86b6-cdcca7df370e","Type":"ContainerStarted","Data":"6fdabdeaf9bb42c59ece3a6e37bd433f307b6b07f41f108176a8a37e0018d820"} Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.173564 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-67h7k" event={"ID":"9676c6cd-275c-4aaa-86b6-cdcca7df370e","Type":"ContainerStarted","Data":"d368e9ab92aa6b536523c7382d02760a75881711d5378a0c96ab6713d906c3c4"} Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.182059 4667 generic.go:334] "Generic (PLEG): container finished" podID="1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321" containerID="976a086def1af0f85f05cb073f62a2d56f8a4de5aa409120a71687681df43cec" exitCode=0 Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.182183 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321","Type":"ContainerDied","Data":"976a086def1af0f85f05cb073f62a2d56f8a4de5aa409120a71687681df43cec"} Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.185698 4667 generic.go:334] "Generic (PLEG): container finished" podID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" containerID="15b01ff7de5eaf63c13a0855e5d28dc639ce10af2ea78aa019afbb0274e79bef" exitCode=0 Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.185749 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f","Type":"ContainerDied","Data":"15b01ff7de5eaf63c13a0855e5d28dc639ce10af2ea78aa019afbb0274e79bef"} Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.200850 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-67h7k" podStartSLOduration=2.200805642 podStartE2EDuration="2.200805642s" podCreationTimestamp="2026-01-31 04:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:11:03.19430222 +0000 UTC m=+1386.710637529" watchObservedRunningTime="2026-01-31 04:11:03.200805642 +0000 UTC m=+1386.717140941" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.476878 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.486220 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.587094 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7qfg\" (UniqueName: \"kubernetes.io/projected/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-kube-api-access-c7qfg\") pod \"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321\" (UID: \"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321\") " Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.587162 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-ceilometer-tls-certs\") pod \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.587280 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-log-httpd\") pod \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.587303 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-run-httpd\") pod \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.587334 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-sg-core-conf-yaml\") pod \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.587428 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-config-data\") pod \"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321\" (UID: \"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321\") " Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.587456 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-combined-ca-bundle\") pod \"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321\" (UID: \"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321\") " Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.587483 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-logs\") pod \"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321\" (UID: \"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321\") " Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.587505 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-config-data\") pod \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.587606 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgmns\" (UniqueName: \"kubernetes.io/projected/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-kube-api-access-hgmns\") pod \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\" (UID: 
\"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.587650 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-combined-ca-bundle\") pod \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.587683 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-scripts\") pod \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\" (UID: \"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f\") " Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.611781 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" (UID: "27ea74e3-7a69-49f6-9bb0-2ccb5f64971f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.616783 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" (UID: "27ea74e3-7a69-49f6-9bb0-2ccb5f64971f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.617474 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-logs" (OuterVolumeSpecName: "logs") pod "1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321" (UID: "1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.653438 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-kube-api-access-hgmns" (OuterVolumeSpecName: "kube-api-access-hgmns") pod "27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" (UID: "27ea74e3-7a69-49f6-9bb0-2ccb5f64971f"). InnerVolumeSpecName "kube-api-access-hgmns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.659204 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-scripts" (OuterVolumeSpecName: "scripts") pod "27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" (UID: "27ea74e3-7a69-49f6-9bb0-2ccb5f64971f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.681630 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-kube-api-access-c7qfg" (OuterVolumeSpecName: "kube-api-access-c7qfg") pod "1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321" (UID: "1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321"). InnerVolumeSpecName "kube-api-access-c7qfg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.695573 4667 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.695615 4667 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.695626 4667 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.695634 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgmns\" (UniqueName: \"kubernetes.io/projected/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-kube-api-access-hgmns\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.695646 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.695657 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7qfg\" (UniqueName: \"kubernetes.io/projected/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-kube-api-access-c7qfg\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.716593 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-config-data" (OuterVolumeSpecName: "config-data") pod "1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321" (UID: "1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.765442 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321" (UID: "1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.802384 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.802427 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.814030 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" (UID: "27ea74e3-7a69-49f6-9bb0-2ccb5f64971f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.842625 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" (UID: "27ea74e3-7a69-49f6-9bb0-2ccb5f64971f"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.867863 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-config-data" (OuterVolumeSpecName: "config-data") pod "27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" (UID: "27ea74e3-7a69-49f6-9bb0-2ccb5f64971f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.887922 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" (UID: "27ea74e3-7a69-49f6-9bb0-2ccb5f64971f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.909090 4667 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.909133 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.909142 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:03 crc kubenswrapper[4667]: I0131 04:11:03.909153 4667 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.196985 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321","Type":"ContainerDied","Data":"78ae642d2e19289dacda4db5b2e86b00d838df02780fd8d1414dcf684b6b8107"} Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.197043 4667 scope.go:117] "RemoveContainer" containerID="976a086def1af0f85f05cb073f62a2d56f8a4de5aa409120a71687681df43cec" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.197171 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.210146 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.211376 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27ea74e3-7a69-49f6-9bb0-2ccb5f64971f","Type":"ContainerDied","Data":"97bd6bdf24e1610941c300633f3e1745cfcb8388f36367d7b75b7a2db4aee4fa"} Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.232806 4667 scope.go:117] "RemoveContainer" containerID="06ba30b7dc4b9df257d4934133689541dba29a5f5be72926d7637d51318a18df" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.259376 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.265794 4667 scope.go:117] "RemoveContainer" containerID="c84a5c48601f0984c2480dee8470e96b21aa2471e1eafac74da7ecc786ffbc25" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.277032 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.313081 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.318950 4667 scope.go:117] "RemoveContainer" containerID="6ca3b1af75834ef27fb2703bbd239f425c7417dbab5a59bd3150a8e71bd993e9" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.347953 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.374432 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:11:04 crc kubenswrapper[4667]: E0131 04:11:04.375127 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" containerName="ceilometer-central-agent" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.375146 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" containerName="ceilometer-central-agent" Jan 31 04:11:04 crc kubenswrapper[4667]: E0131 04:11:04.375197 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321" containerName="nova-api-api" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.375205 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321" containerName="nova-api-api" Jan 31 04:11:04 crc kubenswrapper[4667]: E0131 04:11:04.375224 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" containerName="ceilometer-notification-agent" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.375231 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" containerName="ceilometer-notification-agent" Jan 31 04:11:04 crc kubenswrapper[4667]: E0131 04:11:04.375241 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" containerName="sg-core" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.375247 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" containerName="sg-core" Jan 31 04:11:04 crc kubenswrapper[4667]: E0131 04:11:04.375267 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" containerName="proxy-httpd" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.375273 4667 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" containerName="proxy-httpd" Jan 31 04:11:04 crc kubenswrapper[4667]: E0131 04:11:04.375284 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321" containerName="nova-api-log" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.375290 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321" containerName="nova-api-log" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.375516 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" containerName="ceilometer-notification-agent" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.375543 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" containerName="proxy-httpd" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.375558 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" containerName="ceilometer-central-agent" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.375587 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" containerName="sg-core" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.375600 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321" containerName="nova-api-log" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.375607 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321" containerName="nova-api-api" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.377635 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.383152 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.383355 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.383375 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.392291 4667 scope.go:117] "RemoveContainer" containerID="15b01ff7de5eaf63c13a0855e5d28dc639ce10af2ea78aa019afbb0274e79bef" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.421887 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.434875 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.437342 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.442209 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.442392 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.446678 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.447296 4667 scope.go:117] "RemoveContainer" containerID="86e9cca7895c69c2a106eaed4f81e2df2b707dbddf9b0b60c93de799023e38d9" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.465639 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.525958 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-run-httpd\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.526652 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.526768 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") " pod="openstack/nova-api-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.527012 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.527151 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-logs\") pod \"nova-api-0\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") " pod="openstack/nova-api-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.527261 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-config-data\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.527356 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs9sc\" (UniqueName: \"kubernetes.io/projected/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-kube-api-access-zs9sc\") pod \"nova-api-0\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") " pod="openstack/nova-api-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.527450 
4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb6xh\" (UniqueName: \"kubernetes.io/projected/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-kube-api-access-xb6xh\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.527664 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-config-data\") pod \"nova-api-0\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") " pod="openstack/nova-api-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.527889 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-public-tls-certs\") pod \"nova-api-0\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") " pod="openstack/nova-api-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.527931 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") " pod="openstack/nova-api-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.528007 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-scripts\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.528126 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.528274 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-log-httpd\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.630969 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-run-httpd\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.631020 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.631042 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") " pod="openstack/nova-api-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.631094 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.631124 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-logs\") pod \"nova-api-0\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") " pod="openstack/nova-api-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.631139 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-config-data\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.631164 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs9sc\" (UniqueName: \"kubernetes.io/projected/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-kube-api-access-zs9sc\") pod \"nova-api-0\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") " pod="openstack/nova-api-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.631188 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb6xh\" (UniqueName: \"kubernetes.io/projected/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-kube-api-access-xb6xh\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.631215 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-config-data\") pod \"nova-api-0\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") " pod="openstack/nova-api-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.631247 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-public-tls-certs\") pod \"nova-api-0\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") " pod="openstack/nova-api-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.631264 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") " pod="openstack/nova-api-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.631285 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-scripts\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.631309 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.631335 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-log-httpd\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.631890 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-log-httpd\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.632163 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-run-httpd\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.633615 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-logs\") pod \"nova-api-0\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") " pod="openstack/nova-api-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.639905 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.641076 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-config-data\") pod \"nova-api-0\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") " pod="openstack/nova-api-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.642797 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-scripts\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.643324 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-public-tls-certs\") pod \"nova-api-0\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") " pod="openstack/nova-api-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.647773 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.648721 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-config-data\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.649366 4667 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") " pod="openstack/nova-api-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.653296 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") " pod="openstack/nova-api-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.655187 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.664272 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb6xh\" (UniqueName: \"kubernetes.io/projected/ef1c8a6a-c6c2-451b-9030-9689f2ed116f-kube-api-access-xb6xh\") pod \"ceilometer-0\" (UID: \"ef1c8a6a-c6c2-451b-9030-9689f2ed116f\") " pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.664476 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs9sc\" (UniqueName: \"kubernetes.io/projected/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-kube-api-access-zs9sc\") pod \"nova-api-0\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") " pod="openstack/nova-api-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.709836 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 31 04:11:04 crc kubenswrapper[4667]: I0131 04:11:04.770432 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:11:05 crc kubenswrapper[4667]: I0131 04:11:05.297473 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321" path="/var/lib/kubelet/pods/1c7127a4-f6e5-4c2c-bd1d-6203ad9fb321/volumes" Jan 31 04:11:05 crc kubenswrapper[4667]: I0131 04:11:05.299377 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27ea74e3-7a69-49f6-9bb0-2ccb5f64971f" path="/var/lib/kubelet/pods/27ea74e3-7a69-49f6-9bb0-2ccb5f64971f/volumes" Jan 31 04:11:05 crc kubenswrapper[4667]: I0131 04:11:05.311579 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 31 04:11:05 crc kubenswrapper[4667]: I0131 04:11:05.421254 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:11:05 crc kubenswrapper[4667]: I0131 04:11:05.453771 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:11:05 crc kubenswrapper[4667]: W0131 04:11:05.466179 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddadd46c7_ac31_4495_a0d9_449dc5f63e5c.slice/crio-883de3a4e2a2260cfd7bc5368a63c59f99a3817718de9275661b0e2c9821358c WatchSource:0}: Error finding container 883de3a4e2a2260cfd7bc5368a63c59f99a3817718de9275661b0e2c9821358c: Status 404 returned error can't find the container with id 883de3a4e2a2260cfd7bc5368a63c59f99a3817718de9275661b0e2c9821358c Jan 31 04:11:05 crc kubenswrapper[4667]: I0131 04:11:05.502706 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:11:06 crc kubenswrapper[4667]: I0131 04:11:06.238894 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dadd46c7-ac31-4495-a0d9-449dc5f63e5c","Type":"ContainerStarted","Data":"4a6837c367c4eb8beb155425a6bbf1c46d0062122a064ac226fad2a2663fb402"} Jan 31 04:11:06 crc kubenswrapper[4667]: I0131 04:11:06.239384 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dadd46c7-ac31-4495-a0d9-449dc5f63e5c","Type":"ContainerStarted","Data":"a510de275163510ae036357fc8d3a67b9cc9f3cdb25e54ff0bed2fc590ee5ae2"} Jan 31 04:11:06 crc kubenswrapper[4667]: I0131 04:11:06.239398 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dadd46c7-ac31-4495-a0d9-449dc5f63e5c","Type":"ContainerStarted","Data":"883de3a4e2a2260cfd7bc5368a63c59f99a3817718de9275661b0e2c9821358c"} Jan 31 04:11:06 crc kubenswrapper[4667]: I0131 04:11:06.247152 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef1c8a6a-c6c2-451b-9030-9689f2ed116f","Type":"ContainerStarted","Data":"b10ab0f4c6dd6a48b114fb21ded514bad7936ae2e224a3424d9c7f3f38b06bd6"} Jan 31 04:11:06 crc kubenswrapper[4667]: I0131 04:11:06.247207 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef1c8a6a-c6c2-451b-9030-9689f2ed116f","Type":"ContainerStarted","Data":"e0bb9bb20491f815155d0c39ad2468560a29a86ed14b27f0ea6f310585428e6e"} Jan 31 04:11:06 crc kubenswrapper[4667]: I0131 04:11:06.265217 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.265198111 podStartE2EDuration="2.265198111s" podCreationTimestamp="2026-01-31 04:11:04 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:11:06.262396977 +0000 UTC m=+1389.778732296" watchObservedRunningTime="2026-01-31 04:11:06.265198111 +0000 UTC m=+1389.781533410" Jan 31 04:11:06 crc kubenswrapper[4667]: I0131 04:11:06.718009 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:11:06 crc kubenswrapper[4667]: I0131 04:11:06.825937 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-5mh27"] Jan 31 04:11:06 crc kubenswrapper[4667]: I0131 04:11:06.826698 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-5mh27" podUID="06e48307-baba-472e-b0e1-81fa37a6cd22" containerName="dnsmasq-dns" containerID="cri-o://0174cf68decba0f0779df25a81d026bbfeaa050afe5528356881d0b95c728eec" gracePeriod=10 Jan 31 04:11:07 crc kubenswrapper[4667]: I0131 04:11:07.267275 4667 generic.go:334] "Generic (PLEG): container finished" podID="06e48307-baba-472e-b0e1-81fa37a6cd22" containerID="0174cf68decba0f0779df25a81d026bbfeaa050afe5528356881d0b95c728eec" exitCode=0 Jan 31 04:11:07 crc kubenswrapper[4667]: I0131 04:11:07.268565 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-5mh27" event={"ID":"06e48307-baba-472e-b0e1-81fa37a6cd22","Type":"ContainerDied","Data":"0174cf68decba0f0779df25a81d026bbfeaa050afe5528356881d0b95c728eec"} Jan 31 04:11:07 crc kubenswrapper[4667]: I0131 04:11:07.711540 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-5mh27" Jan 31 04:11:07 crc kubenswrapper[4667]: I0131 04:11:07.819999 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-ovsdbserver-sb\") pod \"06e48307-baba-472e-b0e1-81fa37a6cd22\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " Jan 31 04:11:07 crc kubenswrapper[4667]: I0131 04:11:07.820127 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-ovsdbserver-nb\") pod \"06e48307-baba-472e-b0e1-81fa37a6cd22\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " Jan 31 04:11:07 crc kubenswrapper[4667]: I0131 04:11:07.820190 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-dns-svc\") pod \"06e48307-baba-472e-b0e1-81fa37a6cd22\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " Jan 31 04:11:07 crc kubenswrapper[4667]: I0131 04:11:07.820236 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-dns-swift-storage-0\") pod \"06e48307-baba-472e-b0e1-81fa37a6cd22\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " Jan 31 04:11:07 crc kubenswrapper[4667]: I0131 04:11:07.820290 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp552\" (UniqueName: \"kubernetes.io/projected/06e48307-baba-472e-b0e1-81fa37a6cd22-kube-api-access-jp552\") pod \"06e48307-baba-472e-b0e1-81fa37a6cd22\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " Jan 31 04:11:07 crc kubenswrapper[4667]: I0131 04:11:07.820354 4667 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-config\") pod \"06e48307-baba-472e-b0e1-81fa37a6cd22\" (UID: \"06e48307-baba-472e-b0e1-81fa37a6cd22\") " Jan 31 04:11:07 crc kubenswrapper[4667]: I0131 04:11:07.836818 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06e48307-baba-472e-b0e1-81fa37a6cd22-kube-api-access-jp552" (OuterVolumeSpecName: "kube-api-access-jp552") pod "06e48307-baba-472e-b0e1-81fa37a6cd22" (UID: "06e48307-baba-472e-b0e1-81fa37a6cd22"). InnerVolumeSpecName "kube-api-access-jp552". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:11:07 crc kubenswrapper[4667]: I0131 04:11:07.924515 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp552\" (UniqueName: \"kubernetes.io/projected/06e48307-baba-472e-b0e1-81fa37a6cd22-kube-api-access-jp552\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:07 crc kubenswrapper[4667]: I0131 04:11:07.960894 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "06e48307-baba-472e-b0e1-81fa37a6cd22" (UID: "06e48307-baba-472e-b0e1-81fa37a6cd22"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:11:07 crc kubenswrapper[4667]: I0131 04:11:07.966965 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "06e48307-baba-472e-b0e1-81fa37a6cd22" (UID: "06e48307-baba-472e-b0e1-81fa37a6cd22"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:11:08 crc kubenswrapper[4667]: I0131 04:11:08.017311 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-config" (OuterVolumeSpecName: "config") pod "06e48307-baba-472e-b0e1-81fa37a6cd22" (UID: "06e48307-baba-472e-b0e1-81fa37a6cd22"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:11:08 crc kubenswrapper[4667]: I0131 04:11:08.023228 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "06e48307-baba-472e-b0e1-81fa37a6cd22" (UID: "06e48307-baba-472e-b0e1-81fa37a6cd22"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:11:08 crc kubenswrapper[4667]: I0131 04:11:08.027459 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:08 crc kubenswrapper[4667]: I0131 04:11:08.027605 4667 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:08 crc kubenswrapper[4667]: I0131 04:11:08.027686 4667 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:08 crc kubenswrapper[4667]: I0131 04:11:08.027782 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:08 crc kubenswrapper[4667]: I0131 04:11:08.053627 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "06e48307-baba-472e-b0e1-81fa37a6cd22" (UID: "06e48307-baba-472e-b0e1-81fa37a6cd22"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:11:08 crc kubenswrapper[4667]: I0131 04:11:08.129509 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/06e48307-baba-472e-b0e1-81fa37a6cd22-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:08 crc kubenswrapper[4667]: I0131 04:11:08.303761 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef1c8a6a-c6c2-451b-9030-9689f2ed116f","Type":"ContainerStarted","Data":"722d0e9d318fd6018fbe5bc9a7520ea443d99c9e6b79c939a4c621033cd0655b"} Jan 31 04:11:08 crc kubenswrapper[4667]: I0131 04:11:08.303822 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef1c8a6a-c6c2-451b-9030-9689f2ed116f","Type":"ContainerStarted","Data":"72784a762c0281a8a2a94aaabc17320dc75cac8a5c89c47a81b2c5fa9528a43a"} Jan 31 04:11:08 crc kubenswrapper[4667]: I0131 04:11:08.306371 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-5mh27" event={"ID":"06e48307-baba-472e-b0e1-81fa37a6cd22","Type":"ContainerDied","Data":"904d20444037e8f5571b43373602986fec06dcd925c77290704fc0b9a9212878"} Jan 31 04:11:08 crc kubenswrapper[4667]: I0131 04:11:08.306525 4667 scope.go:117] "RemoveContainer" containerID="0174cf68decba0f0779df25a81d026bbfeaa050afe5528356881d0b95c728eec" Jan 31 04:11:08 crc kubenswrapper[4667]: I0131 04:11:08.306752 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-5mh27" Jan 31 04:11:08 crc kubenswrapper[4667]: I0131 04:11:08.350644 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-5mh27"] Jan 31 04:11:08 crc kubenswrapper[4667]: I0131 04:11:08.355175 4667 scope.go:117] "RemoveContainer" containerID="ad269841ab9261d1bb0e33c293c60208eafa98ba9c660c5c8130e7539581c581" Jan 31 04:11:08 crc kubenswrapper[4667]: I0131 04:11:08.365797 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-5mh27"] Jan 31 04:11:08 crc kubenswrapper[4667]: I0131 04:11:08.825416 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-86c748c4d6-2grmh" Jan 31 04:11:08 crc kubenswrapper[4667]: I0131 04:11:08.921327 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78789d8f44-5trmc"] Jan 31 04:11:08 crc kubenswrapper[4667]: I0131 04:11:08.921619 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon-log" containerID="cri-o://ee19d508369900e40e0fadf4e91e4fb079d0e2cfa86f8523c3b3d7785c2c9dab" gracePeriod=30 Jan 31 04:11:08 crc kubenswrapper[4667]: I0131 04:11:08.921735 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" containerID="cri-o://d09258b8f1e4532ac1a5b7d64767a2c240653c4e63ff54849c8729b313b3f8c4" gracePeriod=30 Jan 31 04:11:09 crc kubenswrapper[4667]: I0131 04:11:09.294183 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06e48307-baba-472e-b0e1-81fa37a6cd22" path="/var/lib/kubelet/pods/06e48307-baba-472e-b0e1-81fa37a6cd22/volumes" Jan 31 04:11:11 crc kubenswrapper[4667]: I0131 04:11:11.344692 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef1c8a6a-c6c2-451b-9030-9689f2ed116f","Type":"ContainerStarted","Data":"22b2533311439b61aabe6798cd5afd876b12ed1d6b00ac5ac83d03a6c922ac85"} Jan 31 04:11:11 crc kubenswrapper[4667]: I0131 04:11:11.347141 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 31 04:11:11 crc kubenswrapper[4667]: I0131 04:11:11.378309 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.129693965 podStartE2EDuration="7.378279337s" podCreationTimestamp="2026-01-31 04:11:04 +0000 UTC" firstStartedPulling="2026-01-31 04:11:05.320960671 +0000 UTC m=+1388.837295970" lastFinishedPulling="2026-01-31 04:11:10.569546043 +0000 UTC m=+1394.085881342" observedRunningTime="2026-01-31 04:11:11.375381051 +0000 UTC m=+1394.891716350" watchObservedRunningTime="2026-01-31 04:11:11.378279337 +0000 UTC m=+1394.894614636" Jan 31 04:11:12 crc kubenswrapper[4667]: I0131 04:11:12.359418 4667 generic.go:334] "Generic (PLEG): container finished" podID="9676c6cd-275c-4aaa-86b6-cdcca7df370e" containerID="6fdabdeaf9bb42c59ece3a6e37bd433f307b6b07f41f108176a8a37e0018d820" exitCode=0 Jan 31 04:11:12 crc kubenswrapper[4667]: I0131 04:11:12.361478 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-67h7k" event={"ID":"9676c6cd-275c-4aaa-86b6-cdcca7df370e","Type":"ContainerDied","Data":"6fdabdeaf9bb42c59ece3a6e37bd433f307b6b07f41f108176a8a37e0018d820"} Jan 31 04:11:12 crc kubenswrapper[4667]: 
Jan 31 04:11:12 crc kubenswrapper[4667]: I0131 04:11:12.370024 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:58494->10.217.0.150:8443: read: connection reset by peer"
Jan 31 04:11:12 crc kubenswrapper[4667]: I0131 04:11:12.370740 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Jan 31 04:11:12 crc kubenswrapper[4667]: I0131 04:11:12.371789 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Jan 31 04:11:12 crc kubenswrapper[4667]: I0131 04:11:12.473950 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757b4f8459-5mh27" podUID="06e48307-baba-472e-b0e1-81fa37a6cd22" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.192:5353: i/o timeout"
Jan 31 04:11:13 crc kubenswrapper[4667]: I0131 04:11:13.376524 4667 generic.go:334] "Generic (PLEG): container finished" podID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerID="d09258b8f1e4532ac1a5b7d64767a2c240653c4e63ff54849c8729b313b3f8c4" exitCode=0
Jan 31 04:11:13 crc kubenswrapper[4667]: I0131 04:11:13.376597 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78789d8f44-5trmc" event={"ID":"b7f8fd18-06a0-432e-8c17-c9b432b6ca69","Type":"ContainerDied","Data":"d09258b8f1e4532ac1a5b7d64767a2c240653c4e63ff54849c8729b313b3f8c4"}
Jan 31 04:11:13 crc kubenswrapper[4667]: I0131 04:11:13.376659 4667 scope.go:117] "RemoveContainer" containerID="856c0d14a9c006eba9b5acda21554d0a1e3d38398546c6f05a23d35e0977b245"
Jan 31 04:11:13 crc kubenswrapper[4667]: I0131 04:11:13.908442 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-67h7k"
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.039305 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9676c6cd-275c-4aaa-86b6-cdcca7df370e-scripts\") pod \"9676c6cd-275c-4aaa-86b6-cdcca7df370e\" (UID: \"9676c6cd-275c-4aaa-86b6-cdcca7df370e\") "
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.039549 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fghmr\" (UniqueName: \"kubernetes.io/projected/9676c6cd-275c-4aaa-86b6-cdcca7df370e-kube-api-access-fghmr\") pod \"9676c6cd-275c-4aaa-86b6-cdcca7df370e\" (UID: \"9676c6cd-275c-4aaa-86b6-cdcca7df370e\") "
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.039644 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9676c6cd-275c-4aaa-86b6-cdcca7df370e-config-data\") pod \"9676c6cd-275c-4aaa-86b6-cdcca7df370e\" (UID: \"9676c6cd-275c-4aaa-86b6-cdcca7df370e\") "
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.039664 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9676c6cd-275c-4aaa-86b6-cdcca7df370e-combined-ca-bundle\") pod \"9676c6cd-275c-4aaa-86b6-cdcca7df370e\" (UID: \"9676c6cd-275c-4aaa-86b6-cdcca7df370e\") "
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.048792 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9676c6cd-275c-4aaa-86b6-cdcca7df370e-kube-api-access-fghmr" (OuterVolumeSpecName: "kube-api-access-fghmr") pod "9676c6cd-275c-4aaa-86b6-cdcca7df370e" (UID: "9676c6cd-275c-4aaa-86b6-cdcca7df370e"). InnerVolumeSpecName "kube-api-access-fghmr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.050496 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9676c6cd-275c-4aaa-86b6-cdcca7df370e-scripts" (OuterVolumeSpecName: "scripts") pod "9676c6cd-275c-4aaa-86b6-cdcca7df370e" (UID: "9676c6cd-275c-4aaa-86b6-cdcca7df370e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.083445 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9676c6cd-275c-4aaa-86b6-cdcca7df370e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9676c6cd-275c-4aaa-86b6-cdcca7df370e" (UID: "9676c6cd-275c-4aaa-86b6-cdcca7df370e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.090869 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9676c6cd-275c-4aaa-86b6-cdcca7df370e-config-data" (OuterVolumeSpecName: "config-data") pod "9676c6cd-275c-4aaa-86b6-cdcca7df370e" (UID: "9676c6cd-275c-4aaa-86b6-cdcca7df370e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.142681 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fghmr\" (UniqueName: \"kubernetes.io/projected/9676c6cd-275c-4aaa-86b6-cdcca7df370e-kube-api-access-fghmr\") on node \"crc\" DevicePath \"\""
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.142856 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9676c6cd-275c-4aaa-86b6-cdcca7df370e-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.142947 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9676c6cd-275c-4aaa-86b6-cdcca7df370e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.143029 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9676c6cd-275c-4aaa-86b6-cdcca7df370e-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.391743 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-67h7k" event={"ID":"9676c6cd-275c-4aaa-86b6-cdcca7df370e","Type":"ContainerDied","Data":"d368e9ab92aa6b536523c7382d02760a75881711d5378a0c96ab6713d906c3c4"}
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.393406 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d368e9ab92aa6b536523c7382d02760a75881711d5378a0c96ab6713d906c3c4"
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.391799 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-67h7k"
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.612551 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.613360 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dadd46c7-ac31-4495-a0d9-449dc5f63e5c" containerName="nova-api-log" containerID="cri-o://a510de275163510ae036357fc8d3a67b9cc9f3cdb25e54ff0bed2fc590ee5ae2" gracePeriod=30
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.613731 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dadd46c7-ac31-4495-a0d9-449dc5f63e5c" containerName="nova-api-api" containerID="cri-o://4a6837c367c4eb8beb155425a6bbf1c46d0062122a064ac226fad2a2663fb402" gracePeriod=30
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.654664 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.655139 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2" containerName="nova-scheduler-scheduler" containerID="cri-o://67ba812a3dbb1bddb983d786ecb4a9f2a74e9a86a95066e5d8fe48e58923a1bd" gracePeriod=30
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.727936 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.728411 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cc940175-88a1-4c91-bb97-1c72a27560b7" containerName="nova-metadata-log" containerID="cri-o://a4d4783cf317b5b57f030b14bd27b29616d92da3626608d2cae928eedaefe55f" gracePeriod=30
Jan 31 04:11:14 crc kubenswrapper[4667]: I0131 04:11:14.728633 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cc940175-88a1-4c91-bb97-1c72a27560b7" containerName="nova-metadata-metadata" containerID="cri-o://cba9133fdf995faf2d55a7394dd4ff01f165970e7f87e8624b363e2f0908451c" gracePeriod=30
Jan 31 04:11:15 crc kubenswrapper[4667]: I0131 04:11:15.425294 4667 generic.go:334] "Generic (PLEG): container finished" podID="dadd46c7-ac31-4495-a0d9-449dc5f63e5c" containerID="4a6837c367c4eb8beb155425a6bbf1c46d0062122a064ac226fad2a2663fb402" exitCode=0
Jan 31 04:11:15 crc kubenswrapper[4667]: I0131 04:11:15.425344 4667 generic.go:334] "Generic (PLEG): container finished" podID="dadd46c7-ac31-4495-a0d9-449dc5f63e5c" containerID="a510de275163510ae036357fc8d3a67b9cc9f3cdb25e54ff0bed2fc590ee5ae2" exitCode=143
Jan 31 04:11:15 crc kubenswrapper[4667]: I0131 04:11:15.425424 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dadd46c7-ac31-4495-a0d9-449dc5f63e5c","Type":"ContainerDied","Data":"4a6837c367c4eb8beb155425a6bbf1c46d0062122a064ac226fad2a2663fb402"}
Jan 31 04:11:15 crc kubenswrapper[4667]: I0131 04:11:15.425480 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dadd46c7-ac31-4495-a0d9-449dc5f63e5c","Type":"ContainerDied","Data":"a510de275163510ae036357fc8d3a67b9cc9f3cdb25e54ff0bed2fc590ee5ae2"}
Jan 31 04:11:15 crc kubenswrapper[4667]: E0131 04:11:15.428412 4667 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67ba812a3dbb1bddb983d786ecb4a9f2a74e9a86a95066e5d8fe48e58923a1bd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 31 04:11:15 crc kubenswrapper[4667]: E0131 04:11:15.431824 4667 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67ba812a3dbb1bddb983d786ecb4a9f2a74e9a86a95066e5d8fe48e58923a1bd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 31 04:11:15 crc kubenswrapper[4667]: I0131 04:11:15.436759 4667 generic.go:334] "Generic (PLEG): container finished" podID="cc940175-88a1-4c91-bb97-1c72a27560b7" containerID="a4d4783cf317b5b57f030b14bd27b29616d92da3626608d2cae928eedaefe55f" exitCode=143
Jan 31 04:11:15 crc kubenswrapper[4667]: I0131 04:11:15.436823 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc940175-88a1-4c91-bb97-1c72a27560b7","Type":"ContainerDied","Data":"a4d4783cf317b5b57f030b14bd27b29616d92da3626608d2cae928eedaefe55f"}
Jan 31 04:11:15 crc kubenswrapper[4667]: E0131 04:11:15.461352 4667 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67ba812a3dbb1bddb983d786ecb4a9f2a74e9a86a95066e5d8fe48e58923a1bd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 31 04:11:15 crc kubenswrapper[4667]: E0131 04:11:15.461460 4667 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2" containerName="nova-scheduler-scheduler"
Jan 31 04:11:15 crc kubenswrapper[4667]: I0131 04:11:15.831387 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 31 04:11:15 crc kubenswrapper[4667]: I0131 04:11:15.989211 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-combined-ca-bundle\") pod \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") "
Jan 31 04:11:15 crc kubenswrapper[4667]: I0131 04:11:15.989585 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-logs\") pod \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") "
Jan 31 04:11:15 crc kubenswrapper[4667]: I0131 04:11:15.989801 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-internal-tls-certs\") pod \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") "
Jan 31 04:11:15 crc kubenswrapper[4667]: I0131 04:11:15.989923 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs9sc\" (UniqueName: \"kubernetes.io/projected/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-kube-api-access-zs9sc\") pod \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") "
Jan 31 04:11:15 crc kubenswrapper[4667]: I0131 04:11:15.989975 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-public-tls-certs\") pod \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") "
Jan 31 04:11:15 crc kubenswrapper[4667]: I0131 04:11:15.990044 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-config-data\") pod \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\" (UID: \"dadd46c7-ac31-4495-a0d9-449dc5f63e5c\") "
Jan 31 04:11:15 crc kubenswrapper[4667]: I0131 04:11:15.990487 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-logs" (OuterVolumeSpecName: "logs") pod "dadd46c7-ac31-4495-a0d9-449dc5f63e5c" (UID: "dadd46c7-ac31-4495-a0d9-449dc5f63e5c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:11:15 crc kubenswrapper[4667]: I0131 04:11:15.991115 4667 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-logs\") on node \"crc\" DevicePath \"\""
Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.020665 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-kube-api-access-zs9sc" (OuterVolumeSpecName: "kube-api-access-zs9sc") pod "dadd46c7-ac31-4495-a0d9-449dc5f63e5c" (UID: "dadd46c7-ac31-4495-a0d9-449dc5f63e5c"). InnerVolumeSpecName "kube-api-access-zs9sc". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.065754 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-config-data" (OuterVolumeSpecName: "config-data") pod "dadd46c7-ac31-4495-a0d9-449dc5f63e5c" (UID: "dadd46c7-ac31-4495-a0d9-449dc5f63e5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.071569 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dadd46c7-ac31-4495-a0d9-449dc5f63e5c" (UID: "dadd46c7-ac31-4495-a0d9-449dc5f63e5c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.075594 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dadd46c7-ac31-4495-a0d9-449dc5f63e5c" (UID: "dadd46c7-ac31-4495-a0d9-449dc5f63e5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.083411 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dadd46c7-ac31-4495-a0d9-449dc5f63e5c" (UID: "dadd46c7-ac31-4495-a0d9-449dc5f63e5c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.093099 4667 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.093138 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs9sc\" (UniqueName: \"kubernetes.io/projected/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-kube-api-access-zs9sc\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.093152 4667 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.093163 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.093175 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dadd46c7-ac31-4495-a0d9-449dc5f63e5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.451262 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dadd46c7-ac31-4495-a0d9-449dc5f63e5c","Type":"ContainerDied","Data":"883de3a4e2a2260cfd7bc5368a63c59f99a3817718de9275661b0e2c9821358c"} Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.451351 4667 scope.go:117] 
"RemoveContainer" containerID="4a6837c367c4eb8beb155425a6bbf1c46d0062122a064ac226fad2a2663fb402" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.451575 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.511108 4667 scope.go:117] "RemoveContainer" containerID="a510de275163510ae036357fc8d3a67b9cc9f3cdb25e54ff0bed2fc590ee5ae2" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.525721 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.550892 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.560689 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 31 04:11:16 crc kubenswrapper[4667]: E0131 04:11:16.561318 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9676c6cd-275c-4aaa-86b6-cdcca7df370e" containerName="nova-manage" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.561340 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="9676c6cd-275c-4aaa-86b6-cdcca7df370e" containerName="nova-manage" Jan 31 04:11:16 crc kubenswrapper[4667]: E0131 04:11:16.561363 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e48307-baba-472e-b0e1-81fa37a6cd22" containerName="dnsmasq-dns" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.561371 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e48307-baba-472e-b0e1-81fa37a6cd22" containerName="dnsmasq-dns" Jan 31 04:11:16 crc kubenswrapper[4667]: E0131 04:11:16.561410 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dadd46c7-ac31-4495-a0d9-449dc5f63e5c" containerName="nova-api-api" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.561420 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="dadd46c7-ac31-4495-a0d9-449dc5f63e5c" containerName="nova-api-api" Jan 31 04:11:16 crc kubenswrapper[4667]: E0131 04:11:16.561436 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e48307-baba-472e-b0e1-81fa37a6cd22" containerName="init" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.561442 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e48307-baba-472e-b0e1-81fa37a6cd22" containerName="init" Jan 31 04:11:16 crc kubenswrapper[4667]: E0131 04:11:16.561460 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dadd46c7-ac31-4495-a0d9-449dc5f63e5c" containerName="nova-api-log" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.561466 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="dadd46c7-ac31-4495-a0d9-449dc5f63e5c" containerName="nova-api-log" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.561664 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="dadd46c7-ac31-4495-a0d9-449dc5f63e5c" containerName="nova-api-log" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.561699 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="9676c6cd-275c-4aaa-86b6-cdcca7df370e" containerName="nova-manage" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.561712 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e48307-baba-472e-b0e1-81fa37a6cd22" containerName="dnsmasq-dns" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.561726 4667 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="dadd46c7-ac31-4495-a0d9-449dc5f63e5c" containerName="nova-api-api" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.563060 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.567438 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.569372 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.570024 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.591397 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.615639 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/162d25d8-8fbe-4a52-808b-971f2017bfc0-logs\") pod \"nova-api-0\" (UID: \"162d25d8-8fbe-4a52-808b-971f2017bfc0\") " pod="openstack/nova-api-0" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.615749 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/162d25d8-8fbe-4a52-808b-971f2017bfc0-config-data\") pod \"nova-api-0\" (UID: \"162d25d8-8fbe-4a52-808b-971f2017bfc0\") " pod="openstack/nova-api-0" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.615799 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/162d25d8-8fbe-4a52-808b-971f2017bfc0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"162d25d8-8fbe-4a52-808b-971f2017bfc0\") " pod="openstack/nova-api-0" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.615909 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/162d25d8-8fbe-4a52-808b-971f2017bfc0-public-tls-certs\") pod \"nova-api-0\" (UID: \"162d25d8-8fbe-4a52-808b-971f2017bfc0\") " pod="openstack/nova-api-0" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.616050 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtz6x\" (UniqueName: \"kubernetes.io/projected/162d25d8-8fbe-4a52-808b-971f2017bfc0-kube-api-access-qtz6x\") pod \"nova-api-0\" (UID: \"162d25d8-8fbe-4a52-808b-971f2017bfc0\") " pod="openstack/nova-api-0" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.616092 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/162d25d8-8fbe-4a52-808b-971f2017bfc0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"162d25d8-8fbe-4a52-808b-971f2017bfc0\") " pod="openstack/nova-api-0" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.718176 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/162d25d8-8fbe-4a52-808b-971f2017bfc0-logs\") pod \"nova-api-0\" (UID: \"162d25d8-8fbe-4a52-808b-971f2017bfc0\") " pod="openstack/nova-api-0" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.718259 4667 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/162d25d8-8fbe-4a52-808b-971f2017bfc0-config-data\") pod \"nova-api-0\" (UID: \"162d25d8-8fbe-4a52-808b-971f2017bfc0\") " pod="openstack/nova-api-0" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.718299 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/162d25d8-8fbe-4a52-808b-971f2017bfc0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"162d25d8-8fbe-4a52-808b-971f2017bfc0\") " pod="openstack/nova-api-0" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.718360 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/162d25d8-8fbe-4a52-808b-971f2017bfc0-public-tls-certs\") pod \"nova-api-0\" (UID: \"162d25d8-8fbe-4a52-808b-971f2017bfc0\") " pod="openstack/nova-api-0" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.718399 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtz6x\" (UniqueName: \"kubernetes.io/projected/162d25d8-8fbe-4a52-808b-971f2017bfc0-kube-api-access-qtz6x\") pod \"nova-api-0\" (UID: \"162d25d8-8fbe-4a52-808b-971f2017bfc0\") " pod="openstack/nova-api-0" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.718423 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/162d25d8-8fbe-4a52-808b-971f2017bfc0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"162d25d8-8fbe-4a52-808b-971f2017bfc0\") " pod="openstack/nova-api-0" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.720399 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/162d25d8-8fbe-4a52-808b-971f2017bfc0-logs\") pod \"nova-api-0\" (UID: \"162d25d8-8fbe-4a52-808b-971f2017bfc0\") " pod="openstack/nova-api-0" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.724520 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/162d25d8-8fbe-4a52-808b-971f2017bfc0-public-tls-certs\") pod \"nova-api-0\" (UID: \"162d25d8-8fbe-4a52-808b-971f2017bfc0\") " pod="openstack/nova-api-0" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.724730 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/162d25d8-8fbe-4a52-808b-971f2017bfc0-config-data\") pod \"nova-api-0\" (UID: \"162d25d8-8fbe-4a52-808b-971f2017bfc0\") " pod="openstack/nova-api-0" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.725026 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/162d25d8-8fbe-4a52-808b-971f2017bfc0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"162d25d8-8fbe-4a52-808b-971f2017bfc0\") " pod="openstack/nova-api-0" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.725186 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/162d25d8-8fbe-4a52-808b-971f2017bfc0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"162d25d8-8fbe-4a52-808b-971f2017bfc0\") " pod="openstack/nova-api-0" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.749828 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qtz6x\" (UniqueName: \"kubernetes.io/projected/162d25d8-8fbe-4a52-808b-971f2017bfc0-kube-api-access-qtz6x\") pod \"nova-api-0\" (UID: \"162d25d8-8fbe-4a52-808b-971f2017bfc0\") " pod="openstack/nova-api-0" Jan 31 04:11:16 crc kubenswrapper[4667]: I0131 04:11:16.890664 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 31 04:11:17 crc kubenswrapper[4667]: I0131 04:11:17.294522 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dadd46c7-ac31-4495-a0d9-449dc5f63e5c" path="/var/lib/kubelet/pods/dadd46c7-ac31-4495-a0d9-449dc5f63e5c/volumes" Jan 31 04:11:17 crc kubenswrapper[4667]: I0131 04:11:17.373694 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 31 04:11:17 crc kubenswrapper[4667]: W0131 04:11:17.383371 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod162d25d8_8fbe_4a52_808b_971f2017bfc0.slice/crio-095506383cc747b0fed4cf504222c8129ab998be9a60d92a5a8dc6bc3c6b22a2 WatchSource:0}: Error finding container 095506383cc747b0fed4cf504222c8129ab998be9a60d92a5a8dc6bc3c6b22a2: Status 404 returned error can't find the container with id 095506383cc747b0fed4cf504222c8129ab998be9a60d92a5a8dc6bc3c6b22a2 Jan 31 04:11:17 crc kubenswrapper[4667]: I0131 04:11:17.466129 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"162d25d8-8fbe-4a52-808b-971f2017bfc0","Type":"ContainerStarted","Data":"095506383cc747b0fed4cf504222c8129ab998be9a60d92a5a8dc6bc3c6b22a2"} Jan 31 04:11:18 crc kubenswrapper[4667]: I0131 04:11:18.288476 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="cc940175-88a1-4c91-bb97-1c72a27560b7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": dial tcp 10.217.0.199:8775: connect: connection refused" Jan 31 04:11:18 crc kubenswrapper[4667]: I0131 04:11:18.288476 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="cc940175-88a1-4c91-bb97-1c72a27560b7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": dial tcp 10.217.0.199:8775: connect: connection refused" Jan 31 04:11:18 crc kubenswrapper[4667]: I0131 04:11:18.486582 4667 generic.go:334] "Generic (PLEG): container finished" podID="cc940175-88a1-4c91-bb97-1c72a27560b7" containerID="cba9133fdf995faf2d55a7394dd4ff01f165970e7f87e8624b363e2f0908451c" exitCode=0 Jan 31 04:11:18 crc kubenswrapper[4667]: I0131 04:11:18.487136 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc940175-88a1-4c91-bb97-1c72a27560b7","Type":"ContainerDied","Data":"cba9133fdf995faf2d55a7394dd4ff01f165970e7f87e8624b363e2f0908451c"} Jan 31 04:11:18 crc kubenswrapper[4667]: I0131 04:11:18.489675 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"162d25d8-8fbe-4a52-808b-971f2017bfc0","Type":"ContainerStarted","Data":"e3d446b92ae34402ba73c7aba9472095d165fb6b87733106a23b3e30847bba74"} Jan 31 04:11:18 crc kubenswrapper[4667]: I0131 04:11:18.489697 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"162d25d8-8fbe-4a52-808b-971f2017bfc0","Type":"ContainerStarted","Data":"508ffb9e82c607ba769989f27a1981198290cdc543ff78b36d8f67350f068dbf"} Jan 31 04:11:18 crc kubenswrapper[4667]: I0131 04:11:18.537084 4667 
Jan 31 04:11:18 crc kubenswrapper[4667]: I0131 04:11:18.537084 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.537064488 podStartE2EDuration="2.537064488s" podCreationTimestamp="2026-01-31 04:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:11:18.530653779 +0000 UTC m=+1402.046989078" watchObservedRunningTime="2026-01-31 04:11:18.537064488 +0000 UTC m=+1402.053399777"
Jan 31 04:11:18 crc kubenswrapper[4667]: I0131 04:11:18.740559 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 31 04:11:18 crc kubenswrapper[4667]: I0131 04:11:18.876934 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc940175-88a1-4c91-bb97-1c72a27560b7-combined-ca-bundle\") pod \"cc940175-88a1-4c91-bb97-1c72a27560b7\" (UID: \"cc940175-88a1-4c91-bb97-1c72a27560b7\") "
Jan 31 04:11:18 crc kubenswrapper[4667]: I0131 04:11:18.877008 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tcx7\" (UniqueName: \"kubernetes.io/projected/cc940175-88a1-4c91-bb97-1c72a27560b7-kube-api-access-8tcx7\") pod \"cc940175-88a1-4c91-bb97-1c72a27560b7\" (UID: \"cc940175-88a1-4c91-bb97-1c72a27560b7\") "
Jan 31 04:11:18 crc kubenswrapper[4667]: I0131 04:11:18.877091 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc940175-88a1-4c91-bb97-1c72a27560b7-logs\") pod \"cc940175-88a1-4c91-bb97-1c72a27560b7\" (UID: \"cc940175-88a1-4c91-bb97-1c72a27560b7\") "
Jan 31 04:11:18 crc kubenswrapper[4667]: I0131 04:11:18.877184 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc940175-88a1-4c91-bb97-1c72a27560b7-config-data\") pod \"cc940175-88a1-4c91-bb97-1c72a27560b7\" (UID: \"cc940175-88a1-4c91-bb97-1c72a27560b7\") "
Jan 31 04:11:18 crc kubenswrapper[4667]: I0131 04:11:18.877292 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc940175-88a1-4c91-bb97-1c72a27560b7-nova-metadata-tls-certs\") pod \"cc940175-88a1-4c91-bb97-1c72a27560b7\" (UID: \"cc940175-88a1-4c91-bb97-1c72a27560b7\") "
Jan 31 04:11:18 crc kubenswrapper[4667]: I0131 04:11:18.878641 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc940175-88a1-4c91-bb97-1c72a27560b7-logs" (OuterVolumeSpecName: "logs") pod "cc940175-88a1-4c91-bb97-1c72a27560b7" (UID: "cc940175-88a1-4c91-bb97-1c72a27560b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:11:18 crc kubenswrapper[4667]: I0131 04:11:18.927119 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc940175-88a1-4c91-bb97-1c72a27560b7-kube-api-access-8tcx7" (OuterVolumeSpecName: "kube-api-access-8tcx7") pod "cc940175-88a1-4c91-bb97-1c72a27560b7" (UID: "cc940175-88a1-4c91-bb97-1c72a27560b7"). InnerVolumeSpecName "kube-api-access-8tcx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.007569 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tcx7\" (UniqueName: \"kubernetes.io/projected/cc940175-88a1-4c91-bb97-1c72a27560b7-kube-api-access-8tcx7\") on node \"crc\" DevicePath \"\""
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.008095 4667 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc940175-88a1-4c91-bb97-1c72a27560b7-logs\") on node \"crc\" DevicePath \"\""
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.036374 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc940175-88a1-4c91-bb97-1c72a27560b7-config-data" (OuterVolumeSpecName: "config-data") pod "cc940175-88a1-4c91-bb97-1c72a27560b7" (UID: "cc940175-88a1-4c91-bb97-1c72a27560b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.123818 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc940175-88a1-4c91-bb97-1c72a27560b7-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.151448 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc940175-88a1-4c91-bb97-1c72a27560b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc940175-88a1-4c91-bb97-1c72a27560b7" (UID: "cc940175-88a1-4c91-bb97-1c72a27560b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.225716 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc940175-88a1-4c91-bb97-1c72a27560b7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "cc940175-88a1-4c91-bb97-1c72a27560b7" (UID: "cc940175-88a1-4c91-bb97-1c72a27560b7"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.228512 4667 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc940175-88a1-4c91-bb97-1c72a27560b7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.228549 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc940175-88a1-4c91-bb97-1c72a27560b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.518933 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc940175-88a1-4c91-bb97-1c72a27560b7","Type":"ContainerDied","Data":"bf1b0877a81418e165365f2daa87f4edbba8653f22ee6f0a0199af1e855ba15b"}
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.519006 4667 scope.go:117] "RemoveContainer" containerID="cba9133fdf995faf2d55a7394dd4ff01f165970e7f87e8624b363e2f0908451c"
Jan 31 04:11:19 crc kubenswrapper[4667]: E0131 04:11:19.519777 4667 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc940175_88a1_4c91_bb97_1c72a27560b7.slice/crio-bf1b0877a81418e165365f2daa87f4edbba8653f22ee6f0a0199af1e855ba15b\": RecentStats: unable to find data in memory cache]"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.520042 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.561014 4667 scope.go:117] "RemoveContainer" containerID="a4d4783cf317b5b57f030b14bd27b29616d92da3626608d2cae928eedaefe55f"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.568243 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.602950 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.613070 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 04:11:19 crc kubenswrapper[4667]: E0131 04:11:19.613752 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc940175-88a1-4c91-bb97-1c72a27560b7" containerName="nova-metadata-log"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.613779 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc940175-88a1-4c91-bb97-1c72a27560b7" containerName="nova-metadata-log"
Jan 31 04:11:19 crc kubenswrapper[4667]: E0131 04:11:19.613819 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc940175-88a1-4c91-bb97-1c72a27560b7" containerName="nova-metadata-metadata"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.613831 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc940175-88a1-4c91-bb97-1c72a27560b7" containerName="nova-metadata-metadata"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.614077 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc940175-88a1-4c91-bb97-1c72a27560b7" containerName="nova-metadata-log"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.614110 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc940175-88a1-4c91-bb97-1c72a27560b7" containerName="nova-metadata-metadata"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.616191 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.621293 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.624187 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.626233 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.740668 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b91fdcb3-e7f6-40d0-97d1-4db13213d61a-logs\") pod \"nova-metadata-0\" (UID: \"b91fdcb3-e7f6-40d0-97d1-4db13213d61a\") " pod="openstack/nova-metadata-0"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.740909 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8bl8\" (UniqueName: \"kubernetes.io/projected/b91fdcb3-e7f6-40d0-97d1-4db13213d61a-kube-api-access-l8bl8\") pod \"nova-metadata-0\" (UID: \"b91fdcb3-e7f6-40d0-97d1-4db13213d61a\") " pod="openstack/nova-metadata-0"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.741237 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b91fdcb3-e7f6-40d0-97d1-4db13213d61a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b91fdcb3-e7f6-40d0-97d1-4db13213d61a\") " pod="openstack/nova-metadata-0"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.741391 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b91fdcb3-e7f6-40d0-97d1-4db13213d61a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b91fdcb3-e7f6-40d0-97d1-4db13213d61a\") " pod="openstack/nova-metadata-0"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.741503 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b91fdcb3-e7f6-40d0-97d1-4db13213d61a-config-data\") pod \"nova-metadata-0\" (UID: \"b91fdcb3-e7f6-40d0-97d1-4db13213d61a\") " pod="openstack/nova-metadata-0"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.843690 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b91fdcb3-e7f6-40d0-97d1-4db13213d61a-logs\") pod \"nova-metadata-0\" (UID: \"b91fdcb3-e7f6-40d0-97d1-4db13213d61a\") " pod="openstack/nova-metadata-0"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.843792 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bl8\" (UniqueName: \"kubernetes.io/projected/b91fdcb3-e7f6-40d0-97d1-4db13213d61a-kube-api-access-l8bl8\") pod \"nova-metadata-0\" (UID: \"b91fdcb3-e7f6-40d0-97d1-4db13213d61a\") " pod="openstack/nova-metadata-0"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.843900 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b91fdcb3-e7f6-40d0-97d1-4db13213d61a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b91fdcb3-e7f6-40d0-97d1-4db13213d61a\") " pod="openstack/nova-metadata-0"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.844819 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b91fdcb3-e7f6-40d0-97d1-4db13213d61a-logs\") pod \"nova-metadata-0\" (UID: \"b91fdcb3-e7f6-40d0-97d1-4db13213d61a\") " pod="openstack/nova-metadata-0"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.844903 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b91fdcb3-e7f6-40d0-97d1-4db13213d61a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b91fdcb3-e7f6-40d0-97d1-4db13213d61a\") " pod="openstack/nova-metadata-0"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.844948 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b91fdcb3-e7f6-40d0-97d1-4db13213d61a-config-data\") pod \"nova-metadata-0\" (UID: \"b91fdcb3-e7f6-40d0-97d1-4db13213d61a\") " pod="openstack/nova-metadata-0"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.851612 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b91fdcb3-e7f6-40d0-97d1-4db13213d61a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b91fdcb3-e7f6-40d0-97d1-4db13213d61a\") " pod="openstack/nova-metadata-0"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.851629 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b91fdcb3-e7f6-40d0-97d1-4db13213d61a-config-data\") pod \"nova-metadata-0\" (UID: \"b91fdcb3-e7f6-40d0-97d1-4db13213d61a\") " pod="openstack/nova-metadata-0"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.852335 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b91fdcb3-e7f6-40d0-97d1-4db13213d61a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b91fdcb3-e7f6-40d0-97d1-4db13213d61a\") " pod="openstack/nova-metadata-0"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.863774 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8bl8\" (UniqueName: \"kubernetes.io/projected/b91fdcb3-e7f6-40d0-97d1-4db13213d61a-kube-api-access-l8bl8\") pod \"nova-metadata-0\" (UID: \"b91fdcb3-e7f6-40d0-97d1-4db13213d61a\") " pod="openstack/nova-metadata-0"
Jan 31 04:11:19 crc kubenswrapper[4667]: I0131 04:11:19.944049 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.346886 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.458915 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2-combined-ca-bundle\") pod \"0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2\" (UID: \"0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2\") "
Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.458991 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2-config-data\") pod \"0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2\" (UID: \"0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2\") "
Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.459148 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdm4x\" (UniqueName: \"kubernetes.io/projected/0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2-kube-api-access-tdm4x\") pod \"0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2\" (UID: \"0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2\") "
Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.467551 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2-kube-api-access-tdm4x" (OuterVolumeSpecName: "kube-api-access-tdm4x") pod "0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2" (UID: "0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2"). InnerVolumeSpecName "kube-api-access-tdm4x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.537699 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2-config-data" (OuterVolumeSpecName: "config-data") pod "0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2" (UID: "0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.537947 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2" (UID: "0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.546169 4667 generic.go:334] "Generic (PLEG): container finished" podID="0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2" containerID="67ba812a3dbb1bddb983d786ecb4a9f2a74e9a86a95066e5d8fe48e58923a1bd" exitCode=0
Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.546221 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2","Type":"ContainerDied","Data":"67ba812a3dbb1bddb983d786ecb4a9f2a74e9a86a95066e5d8fe48e58923a1bd"}
Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.546248 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2","Type":"ContainerDied","Data":"0e3def4e763e31de1e874e63d9e1dee7e46d2caffc2c1db967cc3bea21f44a8e"}
Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.546268 4667 scope.go:117] "RemoveContainer" containerID="67ba812a3dbb1bddb983d786ecb4a9f2a74e9a86a95066e5d8fe48e58923a1bd"
Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.546406 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.562075 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdm4x\" (UniqueName: \"kubernetes.io/projected/0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2-kube-api-access-tdm4x\") on node \"crc\" DevicePath \"\""
Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.562114 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.562125 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.598036 4667 scope.go:117] "RemoveContainer" containerID="67ba812a3dbb1bddb983d786ecb4a9f2a74e9a86a95066e5d8fe48e58923a1bd"
Jan 31 04:11:20 crc kubenswrapper[4667]: E0131 04:11:20.598959 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67ba812a3dbb1bddb983d786ecb4a9f2a74e9a86a95066e5d8fe48e58923a1bd\": container with ID starting with 67ba812a3dbb1bddb983d786ecb4a9f2a74e9a86a95066e5d8fe48e58923a1bd not found: ID does not exist" containerID="67ba812a3dbb1bddb983d786ecb4a9f2a74e9a86a95066e5d8fe48e58923a1bd"
Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.598992 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67ba812a3dbb1bddb983d786ecb4a9f2a74e9a86a95066e5d8fe48e58923a1bd"} err="failed to get container status \"67ba812a3dbb1bddb983d786ecb4a9f2a74e9a86a95066e5d8fe48e58923a1bd\": rpc error: code = NotFound desc = could not find container \"67ba812a3dbb1bddb983d786ecb4a9f2a74e9a86a95066e5d8fe48e58923a1bd\": container with ID starting with 67ba812a3dbb1bddb983d786ecb4a9f2a74e9a86a95066e5d8fe48e58923a1bd not found: ID does not exist"
Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.626774 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
"SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.655449 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.688336 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:11:20 crc kubenswrapper[4667]: E0131 04:11:20.689782 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2" containerName="nova-scheduler-scheduler" Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.689805 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2" containerName="nova-scheduler-scheduler" Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.690029 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2" containerName="nova-scheduler-scheduler" Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.693181 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.706492 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.716948 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.769316 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf1db9a1-f45c-41f0-8d76-2c0318f0299b-config-data\") pod \"nova-scheduler-0\" (UID: \"cf1db9a1-f45c-41f0-8d76-2c0318f0299b\") " pod="openstack/nova-scheduler-0" Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.769432 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf1db9a1-f45c-41f0-8d76-2c0318f0299b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cf1db9a1-f45c-41f0-8d76-2c0318f0299b\") " pod="openstack/nova-scheduler-0" Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.769515 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm5hk\" (UniqueName: \"kubernetes.io/projected/cf1db9a1-f45c-41f0-8d76-2c0318f0299b-kube-api-access-vm5hk\") pod \"nova-scheduler-0\" (UID: \"cf1db9a1-f45c-41f0-8d76-2c0318f0299b\") " pod="openstack/nova-scheduler-0" Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.871557 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm5hk\" (UniqueName: \"kubernetes.io/projected/cf1db9a1-f45c-41f0-8d76-2c0318f0299b-kube-api-access-vm5hk\") pod \"nova-scheduler-0\" (UID: \"cf1db9a1-f45c-41f0-8d76-2c0318f0299b\") " pod="openstack/nova-scheduler-0" Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.871627 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf1db9a1-f45c-41f0-8d76-2c0318f0299b-config-data\") pod \"nova-scheduler-0\" (UID: \"cf1db9a1-f45c-41f0-8d76-2c0318f0299b\") " pod="openstack/nova-scheduler-0" Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.871718 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/cf1db9a1-f45c-41f0-8d76-2c0318f0299b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cf1db9a1-f45c-41f0-8d76-2c0318f0299b\") " pod="openstack/nova-scheduler-0" Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.884898 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf1db9a1-f45c-41f0-8d76-2c0318f0299b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cf1db9a1-f45c-41f0-8d76-2c0318f0299b\") " pod="openstack/nova-scheduler-0" Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.888683 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm5hk\" (UniqueName: \"kubernetes.io/projected/cf1db9a1-f45c-41f0-8d76-2c0318f0299b-kube-api-access-vm5hk\") pod \"nova-scheduler-0\" (UID: \"cf1db9a1-f45c-41f0-8d76-2c0318f0299b\") " pod="openstack/nova-scheduler-0" Jan 31 04:11:20 crc kubenswrapper[4667]: I0131 04:11:20.889505 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf1db9a1-f45c-41f0-8d76-2c0318f0299b-config-data\") pod \"nova-scheduler-0\" (UID: \"cf1db9a1-f45c-41f0-8d76-2c0318f0299b\") " pod="openstack/nova-scheduler-0" Jan 31 04:11:21 crc kubenswrapper[4667]: I0131 04:11:21.011571 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 31 04:11:21 crc kubenswrapper[4667]: I0131 04:11:21.343340 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2" path="/var/lib/kubelet/pods/0dcf0db8-eeb3-49d3-8e36-a69f48aaf7e2/volumes" Jan 31 04:11:21 crc kubenswrapper[4667]: I0131 04:11:21.348424 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc940175-88a1-4c91-bb97-1c72a27560b7" path="/var/lib/kubelet/pods/cc940175-88a1-4c91-bb97-1c72a27560b7/volumes" Jan 31 04:11:21 crc kubenswrapper[4667]: I0131 04:11:21.558261 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b91fdcb3-e7f6-40d0-97d1-4db13213d61a","Type":"ContainerStarted","Data":"8b8314c49d8215a7d5893ad1e8de8ce32bce31fb42f197df09913bd1ada00e20"} Jan 31 04:11:21 crc kubenswrapper[4667]: I0131 04:11:21.558313 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b91fdcb3-e7f6-40d0-97d1-4db13213d61a","Type":"ContainerStarted","Data":"98e37414ba0dda799e9815b9844185bf83fb6f7a118f04e48604a7c5f12adbbe"} Jan 31 04:11:21 crc kubenswrapper[4667]: I0131 04:11:21.558329 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b91fdcb3-e7f6-40d0-97d1-4db13213d61a","Type":"ContainerStarted","Data":"017893d315e3c58f3e6b2c836d0eac03148cb049b69943e291aa1211d155cc9d"} Jan 31 04:11:21 crc kubenswrapper[4667]: I0131 04:11:21.612950 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 31 04:11:21 crc kubenswrapper[4667]: I0131 04:11:21.617693 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.617669267 podStartE2EDuration="2.617669267s" podCreationTimestamp="2026-01-31 04:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:11:21.598042237 +0000 UTC m=+1405.114377536" watchObservedRunningTime="2026-01-31 04:11:21.617669267 +0000 UTC 
m=+1405.134004566" Jan 31 04:11:21 crc kubenswrapper[4667]: I0131 04:11:21.755477 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 31 04:11:22 crc kubenswrapper[4667]: I0131 04:11:22.576684 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cf1db9a1-f45c-41f0-8d76-2c0318f0299b","Type":"ContainerStarted","Data":"8b84715cf4883d689683ae0b97691d2977efc4f0efc74ab9c22c1ef8366ddc7a"} Jan 31 04:11:22 crc kubenswrapper[4667]: I0131 04:11:22.576744 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cf1db9a1-f45c-41f0-8d76-2c0318f0299b","Type":"ContainerStarted","Data":"5d327d5ce0fb79db2b3cad58a4d493a181d9e181b31ffe88d1dd81ba7870bf2f"} Jan 31 04:11:22 crc kubenswrapper[4667]: I0131 04:11:22.599681 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.599657828 podStartE2EDuration="2.599657828s" podCreationTimestamp="2026-01-31 04:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:11:22.590625879 +0000 UTC m=+1406.106961178" watchObservedRunningTime="2026-01-31 04:11:22.599657828 +0000 UTC m=+1406.115993127" Jan 31 04:11:24 crc kubenswrapper[4667]: I0131 04:11:24.944411 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 04:11:24 crc kubenswrapper[4667]: I0131 04:11:24.944990 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 31 04:11:26 crc kubenswrapper[4667]: I0131 04:11:26.012087 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 31 04:11:26 crc kubenswrapper[4667]: I0131 04:11:26.893421 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 04:11:26 crc kubenswrapper[4667]: I0131 04:11:26.893489 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 31 04:11:27 crc kubenswrapper[4667]: I0131 04:11:27.904048 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="162d25d8-8fbe-4a52-808b-971f2017bfc0" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 04:11:27 crc kubenswrapper[4667]: I0131 04:11:27.904183 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="162d25d8-8fbe-4a52-808b-971f2017bfc0" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 04:11:29 crc kubenswrapper[4667]: I0131 04:11:29.944297 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 04:11:29 crc kubenswrapper[4667]: I0131 04:11:29.944809 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 31 04:11:30 crc kubenswrapper[4667]: I0131 04:11:30.962107 4667 prober.go:107] 
"Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b91fdcb3-e7f6-40d0-97d1-4db13213d61a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 04:11:30 crc kubenswrapper[4667]: I0131 04:11:30.962165 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b91fdcb3-e7f6-40d0-97d1-4db13213d61a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 31 04:11:31 crc kubenswrapper[4667]: I0131 04:11:31.012122 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 31 04:11:31 crc kubenswrapper[4667]: I0131 04:11:31.186403 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 31 04:11:31 crc kubenswrapper[4667]: I0131 04:11:31.723443 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 31 04:11:31 crc kubenswrapper[4667]: I0131 04:11:31.755424 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78789d8f44-5trmc" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 31 04:11:34 crc kubenswrapper[4667]: I0131 04:11:34.724545 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 31 04:11:36 crc kubenswrapper[4667]: I0131 04:11:36.913418 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 04:11:36 crc kubenswrapper[4667]: I0131 04:11:36.914339 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 04:11:36 crc kubenswrapper[4667]: I0131 04:11:36.921592 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 04:11:36 crc kubenswrapper[4667]: I0131 04:11:36.923532 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 31 04:11:37 crc kubenswrapper[4667]: I0131 04:11:37.682012 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-phkrx"] Jan 31 04:11:37 crc kubenswrapper[4667]: I0131 04:11:37.685103 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-phkrx" Jan 31 04:11:37 crc kubenswrapper[4667]: I0131 04:11:37.706709 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-phkrx"] Jan 31 04:11:37 crc kubenswrapper[4667]: I0131 04:11:37.759483 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 31 04:11:37 crc kubenswrapper[4667]: I0131 04:11:37.783979 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 31 04:11:37 crc kubenswrapper[4667]: I0131 04:11:37.811587 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1a9dfcd-65ab-4c1e-af87-477f98da17cb-utilities\") pod \"redhat-operators-phkrx\" (UID: \"a1a9dfcd-65ab-4c1e-af87-477f98da17cb\") " pod="openshift-marketplace/redhat-operators-phkrx" Jan 31 04:11:37 crc kubenswrapper[4667]: I0131 04:11:37.812042 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvdzp\" (UniqueName: \"kubernetes.io/projected/a1a9dfcd-65ab-4c1e-af87-477f98da17cb-kube-api-access-bvdzp\") pod \"redhat-operators-phkrx\" (UID: \"a1a9dfcd-65ab-4c1e-af87-477f98da17cb\") " pod="openshift-marketplace/redhat-operators-phkrx" Jan 31 04:11:37 crc kubenswrapper[4667]: I0131 04:11:37.812225 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1a9dfcd-65ab-4c1e-af87-477f98da17cb-catalog-content\") pod \"redhat-operators-phkrx\" (UID: \"a1a9dfcd-65ab-4c1e-af87-477f98da17cb\") " pod="openshift-marketplace/redhat-operators-phkrx" Jan 31 04:11:37 crc kubenswrapper[4667]: I0131 04:11:37.915394 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1a9dfcd-65ab-4c1e-af87-477f98da17cb-utilities\") pod \"redhat-operators-phkrx\" (UID: \"a1a9dfcd-65ab-4c1e-af87-477f98da17cb\") " pod="openshift-marketplace/redhat-operators-phkrx" Jan 31 04:11:37 crc kubenswrapper[4667]: I0131 04:11:37.915562 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvdzp\" (UniqueName: \"kubernetes.io/projected/a1a9dfcd-65ab-4c1e-af87-477f98da17cb-kube-api-access-bvdzp\") pod \"redhat-operators-phkrx\" (UID: \"a1a9dfcd-65ab-4c1e-af87-477f98da17cb\") " pod="openshift-marketplace/redhat-operators-phkrx" Jan 31 04:11:37 crc kubenswrapper[4667]: I0131 04:11:37.915644 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1a9dfcd-65ab-4c1e-af87-477f98da17cb-catalog-content\") pod \"redhat-operators-phkrx\" (UID: \"a1a9dfcd-65ab-4c1e-af87-477f98da17cb\") " pod="openshift-marketplace/redhat-operators-phkrx" Jan 31 04:11:37 crc kubenswrapper[4667]: I0131 04:11:37.916232 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1a9dfcd-65ab-4c1e-af87-477f98da17cb-catalog-content\") pod \"redhat-operators-phkrx\" (UID: \"a1a9dfcd-65ab-4c1e-af87-477f98da17cb\") " pod="openshift-marketplace/redhat-operators-phkrx" Jan 31 04:11:37 crc kubenswrapper[4667]: I0131 04:11:37.916498 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a1a9dfcd-65ab-4c1e-af87-477f98da17cb-utilities\") pod \"redhat-operators-phkrx\" (UID: \"a1a9dfcd-65ab-4c1e-af87-477f98da17cb\") " pod="openshift-marketplace/redhat-operators-phkrx" Jan 31 04:11:38 crc kubenswrapper[4667]: I0131 04:11:38.004971 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvdzp\" (UniqueName: \"kubernetes.io/projected/a1a9dfcd-65ab-4c1e-af87-477f98da17cb-kube-api-access-bvdzp\") pod \"redhat-operators-phkrx\" (UID: \"a1a9dfcd-65ab-4c1e-af87-477f98da17cb\") " pod="openshift-marketplace/redhat-operators-phkrx" Jan 31 04:11:38 crc kubenswrapper[4667]: I0131 04:11:38.055005 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-phkrx" Jan 31 04:11:38 crc kubenswrapper[4667]: I0131 04:11:38.632904 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-phkrx"] Jan 31 04:11:38 crc kubenswrapper[4667]: I0131 04:11:38.790907 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phkrx" event={"ID":"a1a9dfcd-65ab-4c1e-af87-477f98da17cb","Type":"ContainerStarted","Data":"14f08f52ae6f56692b142ddf9b49cb0b7a7ebeb34bf4383aac9d9886e37d0267"} Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.498297 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.574858 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-scripts\") pod \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.574950 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-horizon-tls-certs\") pod \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.574989 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-logs\") pod \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.575126 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-config-data\") pod \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.575158 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-horizon-secret-key\") pod \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.575232 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-combined-ca-bundle\") pod \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\" (UID: 
\"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.575297 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fltqc\" (UniqueName: \"kubernetes.io/projected/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-kube-api-access-fltqc\") pod \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\" (UID: \"b7f8fd18-06a0-432e-8c17-c9b432b6ca69\") " Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.577810 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-logs" (OuterVolumeSpecName: "logs") pod "b7f8fd18-06a0-432e-8c17-c9b432b6ca69" (UID: "b7f8fd18-06a0-432e-8c17-c9b432b6ca69"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.587022 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b7f8fd18-06a0-432e-8c17-c9b432b6ca69" (UID: "b7f8fd18-06a0-432e-8c17-c9b432b6ca69"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.608732 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-config-data" (OuterVolumeSpecName: "config-data") pod "b7f8fd18-06a0-432e-8c17-c9b432b6ca69" (UID: "b7f8fd18-06a0-432e-8c17-c9b432b6ca69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.634212 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7f8fd18-06a0-432e-8c17-c9b432b6ca69" (UID: "b7f8fd18-06a0-432e-8c17-c9b432b6ca69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.652093 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-kube-api-access-fltqc" (OuterVolumeSpecName: "kube-api-access-fltqc") pod "b7f8fd18-06a0-432e-8c17-c9b432b6ca69" (UID: "b7f8fd18-06a0-432e-8c17-c9b432b6ca69"). InnerVolumeSpecName "kube-api-access-fltqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.672155 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-scripts" (OuterVolumeSpecName: "scripts") pod "b7f8fd18-06a0-432e-8c17-c9b432b6ca69" (UID: "b7f8fd18-06a0-432e-8c17-c9b432b6ca69"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.673495 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "b7f8fd18-06a0-432e-8c17-c9b432b6ca69" (UID: "b7f8fd18-06a0-432e-8c17-c9b432b6ca69"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.679758 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.679805 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fltqc\" (UniqueName: \"kubernetes.io/projected/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-kube-api-access-fltqc\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.679820 4667 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.679829 4667 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.679862 4667 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-logs\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.679871 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.679879 4667 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b7f8fd18-06a0-432e-8c17-c9b432b6ca69-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.803705 4667 generic.go:334] "Generic (PLEG): container finished" podID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerID="ee19d508369900e40e0fadf4e91e4fb079d0e2cfa86f8523c3b3d7785c2c9dab" exitCode=137 Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.803808 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78789d8f44-5trmc" event={"ID":"b7f8fd18-06a0-432e-8c17-c9b432b6ca69","Type":"ContainerDied","Data":"ee19d508369900e40e0fadf4e91e4fb079d0e2cfa86f8523c3b3d7785c2c9dab"} Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.803863 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78789d8f44-5trmc" event={"ID":"b7f8fd18-06a0-432e-8c17-c9b432b6ca69","Type":"ContainerDied","Data":"8ace758f8f2a613c4be26a8aa6c951a1305e5bc3c5dfe51cf0ff083c4782b235"} Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.803885 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78789d8f44-5trmc" Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.803890 4667 scope.go:117] "RemoveContainer" containerID="d09258b8f1e4532ac1a5b7d64767a2c240653c4e63ff54849c8729b313b3f8c4" Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.807500 4667 generic.go:334] "Generic (PLEG): container finished" podID="a1a9dfcd-65ab-4c1e-af87-477f98da17cb" containerID="fa317bd5ee64035f868efc45bd20f6e1cdfed6926a7c9c43875510716f14ecf8" exitCode=0 Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.809690 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phkrx" event={"ID":"a1a9dfcd-65ab-4c1e-af87-477f98da17cb","Type":"ContainerDied","Data":"fa317bd5ee64035f868efc45bd20f6e1cdfed6926a7c9c43875510716f14ecf8"} Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.871549 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78789d8f44-5trmc"] Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.893863 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-78789d8f44-5trmc"] Jan 31 04:11:39 crc kubenswrapper[4667]: I0131 04:11:39.958124 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 04:11:40 crc kubenswrapper[4667]: I0131 04:11:40.037330 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 31 04:11:40 crc kubenswrapper[4667]: I0131 04:11:40.037704 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 31 04:11:40 crc kubenswrapper[4667]: I0131 04:11:40.064182 4667 scope.go:117] "RemoveContainer" containerID="ee19d508369900e40e0fadf4e91e4fb079d0e2cfa86f8523c3b3d7785c2c9dab" Jan 31 04:11:40 crc kubenswrapper[4667]: I0131 04:11:40.117487 4667 scope.go:117] "RemoveContainer" containerID="d09258b8f1e4532ac1a5b7d64767a2c240653c4e63ff54849c8729b313b3f8c4" Jan 31 04:11:40 crc kubenswrapper[4667]: E0131 04:11:40.118017 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d09258b8f1e4532ac1a5b7d64767a2c240653c4e63ff54849c8729b313b3f8c4\": container with ID starting with d09258b8f1e4532ac1a5b7d64767a2c240653c4e63ff54849c8729b313b3f8c4 not found: ID does not exist" containerID="d09258b8f1e4532ac1a5b7d64767a2c240653c4e63ff54849c8729b313b3f8c4" Jan 31 04:11:40 crc kubenswrapper[4667]: I0131 04:11:40.118093 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d09258b8f1e4532ac1a5b7d64767a2c240653c4e63ff54849c8729b313b3f8c4"} err="failed to get container status \"d09258b8f1e4532ac1a5b7d64767a2c240653c4e63ff54849c8729b313b3f8c4\": rpc error: code = NotFound desc = could not find container \"d09258b8f1e4532ac1a5b7d64767a2c240653c4e63ff54849c8729b313b3f8c4\": container with ID starting with d09258b8f1e4532ac1a5b7d64767a2c240653c4e63ff54849c8729b313b3f8c4 not found: ID does not exist" Jan 31 04:11:40 crc kubenswrapper[4667]: I0131 04:11:40.118126 4667 scope.go:117] "RemoveContainer" containerID="ee19d508369900e40e0fadf4e91e4fb079d0e2cfa86f8523c3b3d7785c2c9dab" Jan 31 04:11:40 crc kubenswrapper[4667]: E0131 04:11:40.118441 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee19d508369900e40e0fadf4e91e4fb079d0e2cfa86f8523c3b3d7785c2c9dab\": container with ID starting with 
Jan 31 04:11:40 crc kubenswrapper[4667]: I0131 04:11:40.118483 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee19d508369900e40e0fadf4e91e4fb079d0e2cfa86f8523c3b3d7785c2c9dab"} err="failed to get container status \"ee19d508369900e40e0fadf4e91e4fb079d0e2cfa86f8523c3b3d7785c2c9dab\": rpc error: code = NotFound desc = could not find container \"ee19d508369900e40e0fadf4e91e4fb079d0e2cfa86f8523c3b3d7785c2c9dab\": container with ID starting with ee19d508369900e40e0fadf4e91e4fb079d0e2cfa86f8523c3b3d7785c2c9dab not found: ID does not exist"
Jan 31 04:11:40 crc kubenswrapper[4667]: I0131 04:11:40.821538 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phkrx" event={"ID":"a1a9dfcd-65ab-4c1e-af87-477f98da17cb","Type":"ContainerStarted","Data":"3aea1f2fd52df559534b0bab8caa8fe7dbd7602201b4ef49349793b962489a92"}
Jan 31 04:11:40 crc kubenswrapper[4667]: I0131 04:11:40.832360 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 31 04:11:41 crc kubenswrapper[4667]: I0131 04:11:41.358274 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" path="/var/lib/kubelet/pods/b7f8fd18-06a0-432e-8c17-c9b432b6ca69/volumes"
Jan 31 04:11:45 crc kubenswrapper[4667]: I0131 04:11:45.705466 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 04:11:45 crc kubenswrapper[4667]: I0131 04:11:45.706229 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 04:11:47 crc kubenswrapper[4667]: I0131 04:11:47.921258 4667 generic.go:334] "Generic (PLEG): container finished" podID="a1a9dfcd-65ab-4c1e-af87-477f98da17cb" containerID="3aea1f2fd52df559534b0bab8caa8fe7dbd7602201b4ef49349793b962489a92" exitCode=0
Jan 31 04:11:47 crc kubenswrapper[4667]: I0131 04:11:47.921511 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phkrx" event={"ID":"a1a9dfcd-65ab-4c1e-af87-477f98da17cb","Type":"ContainerDied","Data":"3aea1f2fd52df559534b0bab8caa8fe7dbd7602201b4ef49349793b962489a92"}
Jan 31 04:11:48 crc kubenswrapper[4667]: I0131 04:11:48.618981 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 31 04:11:48 crc kubenswrapper[4667]: I0131 04:11:48.936172 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phkrx" event={"ID":"a1a9dfcd-65ab-4c1e-af87-477f98da17cb","Type":"ContainerStarted","Data":"b080c71dd8a3c9123d9859e2e5c028b9a3495dbe0d2682dc29aedbb2553d45ee"}
Jan 31 04:11:48 crc kubenswrapper[4667]: I0131 04:11:48.965161 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-phkrx" podStartSLOduration=3.421512526 podStartE2EDuration="11.965131676s" podCreationTimestamp="2026-01-31 04:11:37 +0000 UTC" firstStartedPulling="2026-01-31 04:11:39.811464254 +0000 UTC m=+1423.327799553" lastFinishedPulling="2026-01-31 04:11:48.355083404 +0000 UTC m=+1431.871418703" observedRunningTime="2026-01-31 04:11:48.954463624 +0000 UTC m=+1432.470798923" watchObservedRunningTime="2026-01-31 04:11:48.965131676 +0000 UTC m=+1432.481466975"
podStartE2EDuration="11.965131676s" podCreationTimestamp="2026-01-31 04:11:37 +0000 UTC" firstStartedPulling="2026-01-31 04:11:39.811464254 +0000 UTC m=+1423.327799553" lastFinishedPulling="2026-01-31 04:11:48.355083404 +0000 UTC m=+1431.871418703" observedRunningTime="2026-01-31 04:11:48.954463624 +0000 UTC m=+1432.470798923" watchObservedRunningTime="2026-01-31 04:11:48.965131676 +0000 UTC m=+1432.481466975" Jan 31 04:11:49 crc kubenswrapper[4667]: I0131 04:11:49.654489 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 04:11:54 crc kubenswrapper[4667]: I0131 04:11:54.713210 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="bf3f1a21-51b1-4282-99e5-ab52084984c0" containerName="rabbitmq" containerID="cri-o://901395cdc9ecff56b45d8817501b39d6e33ec130717eea1fff83080f85b4220b" gracePeriod=604794 Jan 31 04:11:55 crc kubenswrapper[4667]: I0131 04:11:55.320482 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="9265013e-d7ee-49cf-a5d8-c2f80066f459" containerName="rabbitmq" containerID="cri-o://12ceeaf74fd60367ec42bafef6caa1f099c65b5f4b4c7cd6428da69cd8b65718" gracePeriod=604795 Jan 31 04:11:58 crc kubenswrapper[4667]: I0131 04:11:58.056939 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-phkrx" Jan 31 04:11:58 crc kubenswrapper[4667]: I0131 04:11:58.057571 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-phkrx" Jan 31 04:11:59 crc kubenswrapper[4667]: I0131 04:11:59.106114 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-phkrx" podUID="a1a9dfcd-65ab-4c1e-af87-477f98da17cb" containerName="registry-server" probeResult="failure" output=< Jan 31 04:11:59 crc kubenswrapper[4667]: timeout: failed to connect service ":50051" within 1s Jan 31 04:11:59 crc kubenswrapper[4667]: > Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.099483 4667 generic.go:334] "Generic (PLEG): container finished" podID="bf3f1a21-51b1-4282-99e5-ab52084984c0" containerID="901395cdc9ecff56b45d8817501b39d6e33ec130717eea1fff83080f85b4220b" exitCode=0 Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.100314 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf3f1a21-51b1-4282-99e5-ab52084984c0","Type":"ContainerDied","Data":"901395cdc9ecff56b45d8817501b39d6e33ec130717eea1fff83080f85b4220b"} Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.286257 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.357027 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-erlang-cookie\") pod \"bf3f1a21-51b1-4282-99e5-ab52084984c0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.357173 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf3f1a21-51b1-4282-99e5-ab52084984c0-config-data\") pod \"bf3f1a21-51b1-4282-99e5-ab52084984c0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.357206 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-tls\") pod \"bf3f1a21-51b1-4282-99e5-ab52084984c0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.357252 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf3f1a21-51b1-4282-99e5-ab52084984c0-server-conf\") pod \"bf3f1a21-51b1-4282-99e5-ab52084984c0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.357340 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48xpm\" (UniqueName: \"kubernetes.io/projected/bf3f1a21-51b1-4282-99e5-ab52084984c0-kube-api-access-48xpm\") pod \"bf3f1a21-51b1-4282-99e5-ab52084984c0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.357381 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf3f1a21-51b1-4282-99e5-ab52084984c0-plugins-conf\") pod \"bf3f1a21-51b1-4282-99e5-ab52084984c0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.357434 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"bf3f1a21-51b1-4282-99e5-ab52084984c0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.357477 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf3f1a21-51b1-4282-99e5-ab52084984c0-erlang-cookie-secret\") pod \"bf3f1a21-51b1-4282-99e5-ab52084984c0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.357567 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-confd\") pod \"bf3f1a21-51b1-4282-99e5-ab52084984c0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.357625 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf3f1a21-51b1-4282-99e5-ab52084984c0-pod-info\") pod \"bf3f1a21-51b1-4282-99e5-ab52084984c0\" (UID: 
\"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.357661 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-plugins\") pod \"bf3f1a21-51b1-4282-99e5-ab52084984c0\" (UID: \"bf3f1a21-51b1-4282-99e5-ab52084984c0\") " Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.361183 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "bf3f1a21-51b1-4282-99e5-ab52084984c0" (UID: "bf3f1a21-51b1-4282-99e5-ab52084984c0"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.363427 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "bf3f1a21-51b1-4282-99e5-ab52084984c0" (UID: "bf3f1a21-51b1-4282-99e5-ab52084984c0"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.375459 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "bf3f1a21-51b1-4282-99e5-ab52084984c0" (UID: "bf3f1a21-51b1-4282-99e5-ab52084984c0"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.376769 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf3f1a21-51b1-4282-99e5-ab52084984c0-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "bf3f1a21-51b1-4282-99e5-ab52084984c0" (UID: "bf3f1a21-51b1-4282-99e5-ab52084984c0"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.407025 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf3f1a21-51b1-4282-99e5-ab52084984c0-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "bf3f1a21-51b1-4282-99e5-ab52084984c0" (UID: "bf3f1a21-51b1-4282-99e5-ab52084984c0"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.407869 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/bf3f1a21-51b1-4282-99e5-ab52084984c0-pod-info" (OuterVolumeSpecName: "pod-info") pod "bf3f1a21-51b1-4282-99e5-ab52084984c0" (UID: "bf3f1a21-51b1-4282-99e5-ab52084984c0"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.409673 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "bf3f1a21-51b1-4282-99e5-ab52084984c0" (UID: "bf3f1a21-51b1-4282-99e5-ab52084984c0"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.454621 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf3f1a21-51b1-4282-99e5-ab52084984c0-kube-api-access-48xpm" (OuterVolumeSpecName: "kube-api-access-48xpm") pod "bf3f1a21-51b1-4282-99e5-ab52084984c0" (UID: "bf3f1a21-51b1-4282-99e5-ab52084984c0"). InnerVolumeSpecName "kube-api-access-48xpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.462794 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48xpm\" (UniqueName: \"kubernetes.io/projected/bf3f1a21-51b1-4282-99e5-ab52084984c0-kube-api-access-48xpm\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.462831 4667 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf3f1a21-51b1-4282-99e5-ab52084984c0-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.462877 4667 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.462887 4667 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf3f1a21-51b1-4282-99e5-ab52084984c0-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.462897 4667 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf3f1a21-51b1-4282-99e5-ab52084984c0-pod-info\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.462906 4667 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.462914 4667 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.462922 4667 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.466977 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf3f1a21-51b1-4282-99e5-ab52084984c0-config-data" (OuterVolumeSpecName: "config-data") pod "bf3f1a21-51b1-4282-99e5-ab52084984c0" (UID: "bf3f1a21-51b1-4282-99e5-ab52084984c0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.532763 4667 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.539038 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="9265013e-d7ee-49cf-a5d8-c2f80066f459" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.552885 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf3f1a21-51b1-4282-99e5-ab52084984c0-server-conf" (OuterVolumeSpecName: "server-conf") pod "bf3f1a21-51b1-4282-99e5-ab52084984c0" (UID: "bf3f1a21-51b1-4282-99e5-ab52084984c0"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.566588 4667 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.566617 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf3f1a21-51b1-4282-99e5-ab52084984c0-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.566632 4667 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf3f1a21-51b1-4282-99e5-ab52084984c0-server-conf\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.725125 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "bf3f1a21-51b1-4282-99e5-ab52084984c0" (UID: "bf3f1a21-51b1-4282-99e5-ab52084984c0"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:12:01 crc kubenswrapper[4667]: I0131 04:12:01.771986 4667 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf3f1a21-51b1-4282-99e5-ab52084984c0-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.223019 4667 generic.go:334] "Generic (PLEG): container finished" podID="9265013e-d7ee-49cf-a5d8-c2f80066f459" containerID="12ceeaf74fd60367ec42bafef6caa1f099c65b5f4b4c7cd6428da69cd8b65718" exitCode=0 Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.223474 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9265013e-d7ee-49cf-a5d8-c2f80066f459","Type":"ContainerDied","Data":"12ceeaf74fd60367ec42bafef6caa1f099c65b5f4b4c7cd6428da69cd8b65718"} Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.232573 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf3f1a21-51b1-4282-99e5-ab52084984c0","Type":"ContainerDied","Data":"25e1ef4e3e1310cb5be651e126eabf22f6d35bc656a3d48369cbe95d6a81209a"} Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.232638 4667 scope.go:117] "RemoveContainer" containerID="901395cdc9ecff56b45d8817501b39d6e33ec130717eea1fff83080f85b4220b" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.232813 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.335069 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.335408 4667 scope.go:117] "RemoveContainer" containerID="4efc4bdb3236480020f2071f80c15b872eb6c2e4c82ea5836e3deb47c3e785a5" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.337083 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.362370 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.383766 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 04:12:02 crc kubenswrapper[4667]: E0131 04:12:02.384383 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.384404 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" Jan 31 04:12:02 crc kubenswrapper[4667]: E0131 04:12:02.384412 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf3f1a21-51b1-4282-99e5-ab52084984c0" containerName="setup-container" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.384420 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf3f1a21-51b1-4282-99e5-ab52084984c0" containerName="setup-container" Jan 31 04:12:02 crc kubenswrapper[4667]: E0131 04:12:02.384440 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.384447 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" Jan 31 04:12:02 crc kubenswrapper[4667]: E0131 04:12:02.384457 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9265013e-d7ee-49cf-a5d8-c2f80066f459" containerName="rabbitmq" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.384465 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="9265013e-d7ee-49cf-a5d8-c2f80066f459" containerName="rabbitmq" Jan 31 04:12:02 crc kubenswrapper[4667]: E0131 04:12:02.384479 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon-log" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.384485 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon-log" Jan 31 04:12:02 crc kubenswrapper[4667]: E0131 04:12:02.384498 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9265013e-d7ee-49cf-a5d8-c2f80066f459" containerName="setup-container" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.384503 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="9265013e-d7ee-49cf-a5d8-c2f80066f459" containerName="setup-container" Jan 31 04:12:02 crc kubenswrapper[4667]: E0131 04:12:02.384513 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.384520 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" Jan 31 04:12:02 crc kubenswrapper[4667]: E0131 04:12:02.384540 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf3f1a21-51b1-4282-99e5-ab52084984c0" containerName="rabbitmq" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.384546 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf3f1a21-51b1-4282-99e5-ab52084984c0" containerName="rabbitmq" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.384725 4667 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="bf3f1a21-51b1-4282-99e5-ab52084984c0" containerName="rabbitmq" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.384737 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.384750 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.384757 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="9265013e-d7ee-49cf-a5d8-c2f80066f459" containerName="rabbitmq" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.384763 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.384773 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon-log" Jan 31 04:12:02 crc kubenswrapper[4667]: E0131 04:12:02.384991 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.385003 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.385225 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7f8fd18-06a0-432e-8c17-c9b432b6ca69" containerName="horizon" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.386083 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.390951 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.391116 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.391223 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.391381 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-8hd55" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.392098 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.392333 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.392470 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.420175 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.527669 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9265013e-d7ee-49cf-a5d8-c2f80066f459-erlang-cookie-secret\") pod \"9265013e-d7ee-49cf-a5d8-c2f80066f459\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " Jan 31 04:12:02 crc 
kubenswrapper[4667]: I0131 04:12:02.527792 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-erlang-cookie\") pod \"9265013e-d7ee-49cf-a5d8-c2f80066f459\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.527825 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9265013e-d7ee-49cf-a5d8-c2f80066f459-plugins-conf\") pod \"9265013e-d7ee-49cf-a5d8-c2f80066f459\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.527907 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9265013e-d7ee-49cf-a5d8-c2f80066f459-pod-info\") pod \"9265013e-d7ee-49cf-a5d8-c2f80066f459\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.527999 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9265013e-d7ee-49cf-a5d8-c2f80066f459-server-conf\") pod \"9265013e-d7ee-49cf-a5d8-c2f80066f459\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.528019 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-confd\") pod \"9265013e-d7ee-49cf-a5d8-c2f80066f459\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.528051 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9265013e-d7ee-49cf-a5d8-c2f80066f459-config-data\") pod \"9265013e-d7ee-49cf-a5d8-c2f80066f459\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.528077 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"9265013e-d7ee-49cf-a5d8-c2f80066f459\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.528173 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-plugins\") pod \"9265013e-d7ee-49cf-a5d8-c2f80066f459\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.528219 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-tls\") pod \"9265013e-d7ee-49cf-a5d8-c2f80066f459\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.528367 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8wcq\" (UniqueName: \"kubernetes.io/projected/9265013e-d7ee-49cf-a5d8-c2f80066f459-kube-api-access-q8wcq\") pod \"9265013e-d7ee-49cf-a5d8-c2f80066f459\" (UID: \"9265013e-d7ee-49cf-a5d8-c2f80066f459\") " Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 
04:12:02.528926 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aca13392-5591-4b68-9948-c5e5fe558803-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.529002 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aca13392-5591-4b68-9948-c5e5fe558803-pod-info\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.529117 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aca13392-5591-4b68-9948-c5e5fe558803-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.529153 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aca13392-5591-4b68-9948-c5e5fe558803-config-data\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.529211 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aca13392-5591-4b68-9948-c5e5fe558803-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.529243 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf9zn\" (UniqueName: \"kubernetes.io/projected/aca13392-5591-4b68-9948-c5e5fe558803-kube-api-access-rf9zn\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.529317 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aca13392-5591-4b68-9948-c5e5fe558803-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.529343 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9265013e-d7ee-49cf-a5d8-c2f80066f459" (UID: "9265013e-d7ee-49cf-a5d8-c2f80066f459"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.529359 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aca13392-5591-4b68-9948-c5e5fe558803-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.529385 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.529441 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aca13392-5591-4b68-9948-c5e5fe558803-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.529463 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aca13392-5591-4b68-9948-c5e5fe558803-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.529560 4667 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.529758 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9265013e-d7ee-49cf-a5d8-c2f80066f459" (UID: "9265013e-d7ee-49cf-a5d8-c2f80066f459"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.530828 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9265013e-d7ee-49cf-a5d8-c2f80066f459-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9265013e-d7ee-49cf-a5d8-c2f80066f459" (UID: "9265013e-d7ee-49cf-a5d8-c2f80066f459"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.538295 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9265013e-d7ee-49cf-a5d8-c2f80066f459-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9265013e-d7ee-49cf-a5d8-c2f80066f459" (UID: "9265013e-d7ee-49cf-a5d8-c2f80066f459"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.541965 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "9265013e-d7ee-49cf-a5d8-c2f80066f459" (UID: "9265013e-d7ee-49cf-a5d8-c2f80066f459"). 
InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.542358 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9265013e-d7ee-49cf-a5d8-c2f80066f459-pod-info" (OuterVolumeSpecName: "pod-info") pod "9265013e-d7ee-49cf-a5d8-c2f80066f459" (UID: "9265013e-d7ee-49cf-a5d8-c2f80066f459"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.559765 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9265013e-d7ee-49cf-a5d8-c2f80066f459-kube-api-access-q8wcq" (OuterVolumeSpecName: "kube-api-access-q8wcq") pod "9265013e-d7ee-49cf-a5d8-c2f80066f459" (UID: "9265013e-d7ee-49cf-a5d8-c2f80066f459"). InnerVolumeSpecName "kube-api-access-q8wcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.563250 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9265013e-d7ee-49cf-a5d8-c2f80066f459-config-data" (OuterVolumeSpecName: "config-data") pod "9265013e-d7ee-49cf-a5d8-c2f80066f459" (UID: "9265013e-d7ee-49cf-a5d8-c2f80066f459"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.569469 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9265013e-d7ee-49cf-a5d8-c2f80066f459" (UID: "9265013e-d7ee-49cf-a5d8-c2f80066f459"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.603930 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9265013e-d7ee-49cf-a5d8-c2f80066f459-server-conf" (OuterVolumeSpecName: "server-conf") pod "9265013e-d7ee-49cf-a5d8-c2f80066f459" (UID: "9265013e-d7ee-49cf-a5d8-c2f80066f459"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.631318 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aca13392-5591-4b68-9948-c5e5fe558803-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.631992 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aca13392-5591-4b68-9948-c5e5fe558803-config-data\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.632091 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aca13392-5591-4b68-9948-c5e5fe558803-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.632257 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf9zn\" (UniqueName: \"kubernetes.io/projected/aca13392-5591-4b68-9948-c5e5fe558803-kube-api-access-rf9zn\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.632350 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aca13392-5591-4b68-9948-c5e5fe558803-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.632416 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aca13392-5591-4b68-9948-c5e5fe558803-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.632486 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.632556 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aca13392-5591-4b68-9948-c5e5fe558803-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.632629 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aca13392-5591-4b68-9948-c5e5fe558803-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.632742 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/aca13392-5591-4b68-9948-c5e5fe558803-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.632825 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aca13392-5591-4b68-9948-c5e5fe558803-pod-info\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.633037 4667 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9265013e-d7ee-49cf-a5d8-c2f80066f459-server-conf\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.633095 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9265013e-d7ee-49cf-a5d8-c2f80066f459-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.633268 4667 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.633322 4667 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.633375 4667 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.633424 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8wcq\" (UniqueName: \"kubernetes.io/projected/9265013e-d7ee-49cf-a5d8-c2f80066f459-kube-api-access-q8wcq\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.633473 4667 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9265013e-d7ee-49cf-a5d8-c2f80066f459-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.633532 4667 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9265013e-d7ee-49cf-a5d8-c2f80066f459-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.633582 4667 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9265013e-d7ee-49cf-a5d8-c2f80066f459-pod-info\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.633746 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aca13392-5591-4b68-9948-c5e5fe558803-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.633954 4667 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" 
(UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.634143 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aca13392-5591-4b68-9948-c5e5fe558803-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.633095 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aca13392-5591-4b68-9948-c5e5fe558803-config-data\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.632620 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aca13392-5591-4b68-9948-c5e5fe558803-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.637990 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aca13392-5591-4b68-9948-c5e5fe558803-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.639422 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aca13392-5591-4b68-9948-c5e5fe558803-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.645862 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aca13392-5591-4b68-9948-c5e5fe558803-pod-info\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.654914 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aca13392-5591-4b68-9948-c5e5fe558803-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.657684 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aca13392-5591-4b68-9948-c5e5fe558803-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.665645 4667 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.673724 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " 
pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.673753 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf9zn\" (UniqueName: \"kubernetes.io/projected/aca13392-5591-4b68-9948-c5e5fe558803-kube-api-access-rf9zn\") pod \"rabbitmq-server-0\" (UID: \"aca13392-5591-4b68-9948-c5e5fe558803\") " pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.732900 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.736036 4667 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.772098 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9265013e-d7ee-49cf-a5d8-c2f80066f459" (UID: "9265013e-d7ee-49cf-a5d8-c2f80066f459"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:12:02 crc kubenswrapper[4667]: I0131 04:12:02.837668 4667 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9265013e-d7ee-49cf-a5d8-c2f80066f459-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.242888 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9265013e-d7ee-49cf-a5d8-c2f80066f459","Type":"ContainerDied","Data":"3e792268fbb8001fb96f9c2e1920528f28aa7968bc06baab5559142c3b8b94d9"} Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.243322 4667 scope.go:117] "RemoveContainer" containerID="12ceeaf74fd60367ec42bafef6caa1f099c65b5f4b4c7cd6428da69cd8b65718" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.243447 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.323354 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf3f1a21-51b1-4282-99e5-ab52084984c0" path="/var/lib/kubelet/pods/bf3f1a21-51b1-4282-99e5-ab52084984c0/volumes" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.324317 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.324348 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.332095 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.334139 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.334470 4667 scope.go:117] "RemoveContainer" containerID="4acd211c95f8f9b2d57a089d9d7532f112c96d9e87ccf2175d7359401f40ac7e" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.338792 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.339060 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.339165 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.339271 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.339406 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.339617 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-p77j2" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.339723 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.360528 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.370453 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.465291 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.465401 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.465424 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.465458 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.465530 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.465581 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9vlk\" (UniqueName: \"kubernetes.io/projected/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-kube-api-access-v9vlk\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.465631 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.465658 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.465717 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.465746 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.465767 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.567737 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.568040 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.568174 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.568292 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.568442 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.568546 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9vlk\" (UniqueName: \"kubernetes.io/projected/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-kube-api-access-v9vlk\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.568667 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.570511 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.570679 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.570793 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.570899 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.569379 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.571904 4667 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.570271 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.569960 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.570448 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.572090 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.575910 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.576293 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.582603 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.586075 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.605511 4667 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-v9vlk\" (UniqueName: \"kubernetes.io/projected/acadb76e-2e9d-4af4-a5d1-fb5f28b006c6-kube-api-access-v9vlk\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.627928 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6\") " pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:03 crc kubenswrapper[4667]: I0131 04:12:03.801159 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.269170 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aca13392-5591-4b68-9948-c5e5fe558803","Type":"ContainerStarted","Data":"75e4766671e0faad2a4598d4b3d0c6c149375e4d42d961b5aa5de5fbd525bdf6"} Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.311533 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.577377 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-7bvl6"] Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.580737 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.589737 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.599643 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-7bvl6"] Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.762030 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.762375 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.762515 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.762635 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-config\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 
crc kubenswrapper[4667]: I0131 04:12:04.762729 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.762863 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg9vw\" (UniqueName: \"kubernetes.io/projected/b971d20f-c021-4008-a666-1cfd5ea90764-kube-api-access-wg9vw\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.762962 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.865035 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-config\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.865099 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.865144 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg9vw\" (UniqueName: \"kubernetes.io/projected/b971d20f-c021-4008-a666-1cfd5ea90764-kube-api-access-wg9vw\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.865179 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.865239 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.865272 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.865303 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.866828 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.866900 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.867172 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-config\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.867193 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.867498 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.867960 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:04 crc kubenswrapper[4667]: I0131 04:12:04.956185 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg9vw\" (UniqueName: \"kubernetes.io/projected/b971d20f-c021-4008-a666-1cfd5ea90764-kube-api-access-wg9vw\") pod \"dnsmasq-dns-79bd4cc8c9-7bvl6\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:05 crc kubenswrapper[4667]: I0131 04:12:05.214651 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:05 crc kubenswrapper[4667]: I0131 04:12:05.280335 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aca13392-5591-4b68-9948-c5e5fe558803","Type":"ContainerStarted","Data":"e31b870abf0376603347f49b0b9bf9ccb823de8a86d25a0c85b2525a0b52caf4"} Jan 31 04:12:05 crc kubenswrapper[4667]: I0131 04:12:05.295518 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9265013e-d7ee-49cf-a5d8-c2f80066f459" path="/var/lib/kubelet/pods/9265013e-d7ee-49cf-a5d8-c2f80066f459/volumes" Jan 31 04:12:05 crc kubenswrapper[4667]: I0131 04:12:05.296970 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6","Type":"ContainerStarted","Data":"e9e72cc2458101c9030ba181ac93c7ec979c8ffb68342dfad07a5049305f4116"} Jan 31 04:12:05 crc kubenswrapper[4667]: I0131 04:12:05.777876 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-7bvl6"] Jan 31 04:12:06 crc kubenswrapper[4667]: I0131 04:12:06.297031 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" event={"ID":"b971d20f-c021-4008-a666-1cfd5ea90764","Type":"ContainerStarted","Data":"5462dd7cc7b63ab70838b5cab49f69881daa0772b090a06ce6f7a90167a05dc6"} Jan 31 04:12:07 crc kubenswrapper[4667]: I0131 04:12:07.307153 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6","Type":"ContainerStarted","Data":"33c296e2df48bdf788ad1603574ec51626caa8d8c62b9d6d72ec14f9c864e043"} Jan 31 04:12:07 crc kubenswrapper[4667]: I0131 04:12:07.310904 4667 generic.go:334] "Generic (PLEG): container finished" podID="b971d20f-c021-4008-a666-1cfd5ea90764" containerID="2ca1fdfe610f2e28548d7f68ab3661d13c06661a744c92d620431a7282143832" exitCode=0 Jan 31 04:12:07 crc kubenswrapper[4667]: I0131 04:12:07.310948 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" event={"ID":"b971d20f-c021-4008-a666-1cfd5ea90764","Type":"ContainerDied","Data":"2ca1fdfe610f2e28548d7f68ab3661d13c06661a744c92d620431a7282143832"} Jan 31 04:12:08 crc kubenswrapper[4667]: I0131 04:12:08.323482 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" event={"ID":"b971d20f-c021-4008-a666-1cfd5ea90764","Type":"ContainerStarted","Data":"8f84f2abd83be74624f80616ad17aa082654338d021f528d6339ce59d646461d"} Jan 31 04:12:08 crc kubenswrapper[4667]: I0131 04:12:08.357317 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" podStartSLOduration=4.357286815 podStartE2EDuration="4.357286815s" podCreationTimestamp="2026-01-31 04:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:12:08.355764964 +0000 UTC m=+1451.872100263" watchObservedRunningTime="2026-01-31 04:12:08.357286815 +0000 UTC m=+1451.873622124" Jan 31 04:12:09 crc kubenswrapper[4667]: I0131 04:12:09.101569 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-phkrx" podUID="a1a9dfcd-65ab-4c1e-af87-477f98da17cb" containerName="registry-server" probeResult="failure" output=< Jan 31 04:12:09 crc kubenswrapper[4667]: timeout: failed to connect service ":50051" within 1s Jan 
31 04:12:09 crc kubenswrapper[4667]: > Jan 31 04:12:09 crc kubenswrapper[4667]: I0131 04:12:09.332677 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.216089 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.320367 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-zq45f"] Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.320642 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" podUID="3110271e-39e9-431e-a5dd-880758179c6c" containerName="dnsmasq-dns" containerID="cri-o://966be1fbd42996f3a22e285c0682af2d33ea60b5652cb469a3dc2a2b9a75e8c5" gracePeriod=10 Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.704238 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.704710 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.718511 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f4d4c4b7-jpndh"] Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.721081 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.732784 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f4d4c4b7-jpndh"] Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.833270 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab755590-ad93-4840-b261-9317b1c0cb54-dns-svc\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.833354 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ab755590-ad93-4840-b261-9317b1c0cb54-openstack-edpm-ipam\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.833382 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab755590-ad93-4840-b261-9317b1c0cb54-ovsdbserver-nb\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.833580 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdk7d\" (UniqueName: \"kubernetes.io/projected/ab755590-ad93-4840-b261-9317b1c0cb54-kube-api-access-kdk7d\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.833807 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab755590-ad93-4840-b261-9317b1c0cb54-dns-swift-storage-0\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.834025 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab755590-ad93-4840-b261-9317b1c0cb54-config\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.834936 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab755590-ad93-4840-b261-9317b1c0cb54-ovsdbserver-sb\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.936551 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab755590-ad93-4840-b261-9317b1c0cb54-config\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.936639 4667 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab755590-ad93-4840-b261-9317b1c0cb54-ovsdbserver-sb\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.936691 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab755590-ad93-4840-b261-9317b1c0cb54-dns-svc\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.936737 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ab755590-ad93-4840-b261-9317b1c0cb54-openstack-edpm-ipam\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.936762 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab755590-ad93-4840-b261-9317b1c0cb54-ovsdbserver-nb\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.936787 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdk7d\" (UniqueName: \"kubernetes.io/projected/ab755590-ad93-4840-b261-9317b1c0cb54-kube-api-access-kdk7d\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.936824 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab755590-ad93-4840-b261-9317b1c0cb54-dns-swift-storage-0\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.938412 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab755590-ad93-4840-b261-9317b1c0cb54-config\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.940016 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab755590-ad93-4840-b261-9317b1c0cb54-ovsdbserver-sb\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.940158 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab755590-ad93-4840-b261-9317b1c0cb54-dns-svc\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.940236 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ab755590-ad93-4840-b261-9317b1c0cb54-ovsdbserver-nb\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.940877 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ab755590-ad93-4840-b261-9317b1c0cb54-openstack-edpm-ipam\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.941115 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab755590-ad93-4840-b261-9317b1c0cb54-dns-swift-storage-0\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.952622 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:12:15 crc kubenswrapper[4667]: I0131 04:12:15.967571 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdk7d\" (UniqueName: \"kubernetes.io/projected/ab755590-ad93-4840-b261-9317b1c0cb54-kube-api-access-kdk7d\") pod \"dnsmasq-dns-f4d4c4b7-jpndh\" (UID: \"ab755590-ad93-4840-b261-9317b1c0cb54\") " pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.044855 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-ovsdbserver-nb\") pod \"3110271e-39e9-431e-a5dd-880758179c6c\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.045447 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mggr9\" (UniqueName: \"kubernetes.io/projected/3110271e-39e9-431e-a5dd-880758179c6c-kube-api-access-mggr9\") pod \"3110271e-39e9-431e-a5dd-880758179c6c\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.045536 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-config\") pod \"3110271e-39e9-431e-a5dd-880758179c6c\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.045695 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-ovsdbserver-sb\") pod \"3110271e-39e9-431e-a5dd-880758179c6c\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.045778 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-dns-svc\") pod \"3110271e-39e9-431e-a5dd-880758179c6c\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.045892 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-dns-swift-storage-0\") pod \"3110271e-39e9-431e-a5dd-880758179c6c\" (UID: \"3110271e-39e9-431e-a5dd-880758179c6c\") " Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.063347 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.084180 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3110271e-39e9-431e-a5dd-880758179c6c-kube-api-access-mggr9" (OuterVolumeSpecName: "kube-api-access-mggr9") pod "3110271e-39e9-431e-a5dd-880758179c6c" (UID: "3110271e-39e9-431e-a5dd-880758179c6c"). InnerVolumeSpecName "kube-api-access-mggr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.152327 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mggr9\" (UniqueName: \"kubernetes.io/projected/3110271e-39e9-431e-a5dd-880758179c6c-kube-api-access-mggr9\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.159767 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3110271e-39e9-431e-a5dd-880758179c6c" (UID: "3110271e-39e9-431e-a5dd-880758179c6c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.169277 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3110271e-39e9-431e-a5dd-880758179c6c" (UID: "3110271e-39e9-431e-a5dd-880758179c6c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.172911 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3110271e-39e9-431e-a5dd-880758179c6c" (UID: "3110271e-39e9-431e-a5dd-880758179c6c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.184471 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-config" (OuterVolumeSpecName: "config") pod "3110271e-39e9-431e-a5dd-880758179c6c" (UID: "3110271e-39e9-431e-a5dd-880758179c6c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.206060 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3110271e-39e9-431e-a5dd-880758179c6c" (UID: "3110271e-39e9-431e-a5dd-880758179c6c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.254151 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.254189 4667 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.254199 4667 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.254210 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.254218 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3110271e-39e9-431e-a5dd-880758179c6c-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.403003 4667 generic.go:334] "Generic (PLEG): container finished" podID="3110271e-39e9-431e-a5dd-880758179c6c" containerID="966be1fbd42996f3a22e285c0682af2d33ea60b5652cb469a3dc2a2b9a75e8c5" exitCode=0 Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.403052 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" event={"ID":"3110271e-39e9-431e-a5dd-880758179c6c","Type":"ContainerDied","Data":"966be1fbd42996f3a22e285c0682af2d33ea60b5652cb469a3dc2a2b9a75e8c5"} Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.403084 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" event={"ID":"3110271e-39e9-431e-a5dd-880758179c6c","Type":"ContainerDied","Data":"9d09d7b3b9debff2199a2e05bfa578f16a7608f51590714bab879ade4de3a250"} Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.403106 4667 scope.go:117] "RemoveContainer" containerID="966be1fbd42996f3a22e285c0682af2d33ea60b5652cb469a3dc2a2b9a75e8c5" Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.403259 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-zq45f" Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.440634 4667 scope.go:117] "RemoveContainer" containerID="0f5e51e10d43b469a556ebd91ce1fb8b1cc085160205ae9712b4efd0eeee893c" Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.448305 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-zq45f"] Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.464823 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-zq45f"] Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.518892 4667 scope.go:117] "RemoveContainer" containerID="966be1fbd42996f3a22e285c0682af2d33ea60b5652cb469a3dc2a2b9a75e8c5" Jan 31 04:12:16 crc kubenswrapper[4667]: E0131 04:12:16.519535 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"966be1fbd42996f3a22e285c0682af2d33ea60b5652cb469a3dc2a2b9a75e8c5\": container with ID starting with 966be1fbd42996f3a22e285c0682af2d33ea60b5652cb469a3dc2a2b9a75e8c5 not found: ID does not exist" containerID="966be1fbd42996f3a22e285c0682af2d33ea60b5652cb469a3dc2a2b9a75e8c5" Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.519592 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"966be1fbd42996f3a22e285c0682af2d33ea60b5652cb469a3dc2a2b9a75e8c5"} err="failed to get container status \"966be1fbd42996f3a22e285c0682af2d33ea60b5652cb469a3dc2a2b9a75e8c5\": rpc error: code = NotFound desc = could not find container \"966be1fbd42996f3a22e285c0682af2d33ea60b5652cb469a3dc2a2b9a75e8c5\": container with ID starting with 966be1fbd42996f3a22e285c0682af2d33ea60b5652cb469a3dc2a2b9a75e8c5 not found: ID does not exist" Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.519630 4667 scope.go:117] "RemoveContainer" containerID="0f5e51e10d43b469a556ebd91ce1fb8b1cc085160205ae9712b4efd0eeee893c" Jan 31 04:12:16 crc kubenswrapper[4667]: E0131 04:12:16.521405 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f5e51e10d43b469a556ebd91ce1fb8b1cc085160205ae9712b4efd0eeee893c\": container with ID starting with 0f5e51e10d43b469a556ebd91ce1fb8b1cc085160205ae9712b4efd0eeee893c not found: ID does not exist" containerID="0f5e51e10d43b469a556ebd91ce1fb8b1cc085160205ae9712b4efd0eeee893c" Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.521449 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f5e51e10d43b469a556ebd91ce1fb8b1cc085160205ae9712b4efd0eeee893c"} err="failed to get container status \"0f5e51e10d43b469a556ebd91ce1fb8b1cc085160205ae9712b4efd0eeee893c\": rpc error: code = NotFound desc = could not find container \"0f5e51e10d43b469a556ebd91ce1fb8b1cc085160205ae9712b4efd0eeee893c\": container with ID starting with 0f5e51e10d43b469a556ebd91ce1fb8b1cc085160205ae9712b4efd0eeee893c not found: ID does not exist" Jan 31 04:12:16 crc kubenswrapper[4667]: W0131 04:12:16.579652 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab755590_ad93_4840_b261_9317b1c0cb54.slice/crio-0a5314c7f0873f5f11cc9ac0b82d7aae253e163d33c1e45d35cf22d8a203f34f WatchSource:0}: Error finding container 0a5314c7f0873f5f11cc9ac0b82d7aae253e163d33c1e45d35cf22d8a203f34f: Status 404 returned error can't find the container with id 
0a5314c7f0873f5f11cc9ac0b82d7aae253e163d33c1e45d35cf22d8a203f34f Jan 31 04:12:16 crc kubenswrapper[4667]: I0131 04:12:16.584875 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f4d4c4b7-jpndh"] Jan 31 04:12:17 crc kubenswrapper[4667]: I0131 04:12:17.293263 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3110271e-39e9-431e-a5dd-880758179c6c" path="/var/lib/kubelet/pods/3110271e-39e9-431e-a5dd-880758179c6c/volumes" Jan 31 04:12:17 crc kubenswrapper[4667]: I0131 04:12:17.415904 4667 generic.go:334] "Generic (PLEG): container finished" podID="ab755590-ad93-4840-b261-9317b1c0cb54" containerID="8d226393c01242cdef8bc7b9ea165b672dae7b776459140d3de8d98916c7ee09" exitCode=0 Jan 31 04:12:17 crc kubenswrapper[4667]: I0131 04:12:17.415972 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" event={"ID":"ab755590-ad93-4840-b261-9317b1c0cb54","Type":"ContainerDied","Data":"8d226393c01242cdef8bc7b9ea165b672dae7b776459140d3de8d98916c7ee09"} Jan 31 04:12:17 crc kubenswrapper[4667]: I0131 04:12:17.416036 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" event={"ID":"ab755590-ad93-4840-b261-9317b1c0cb54","Type":"ContainerStarted","Data":"0a5314c7f0873f5f11cc9ac0b82d7aae253e163d33c1e45d35cf22d8a203f34f"} Jan 31 04:12:18 crc kubenswrapper[4667]: I0131 04:12:18.432809 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" event={"ID":"ab755590-ad93-4840-b261-9317b1c0cb54","Type":"ContainerStarted","Data":"dadf12d70930ebf760a32d579336eae07e605ac4e84064b74773c12084409ed6"} Jan 31 04:12:18 crc kubenswrapper[4667]: I0131 04:12:18.453492 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" podStartSLOduration=3.453466622 podStartE2EDuration="3.453466622s" podCreationTimestamp="2026-01-31 04:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:12:18.452089436 +0000 UTC m=+1461.968424745" watchObservedRunningTime="2026-01-31 04:12:18.453466622 +0000 UTC m=+1461.969801921" Jan 31 04:12:19 crc kubenswrapper[4667]: I0131 04:12:19.107728 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-phkrx" podUID="a1a9dfcd-65ab-4c1e-af87-477f98da17cb" containerName="registry-server" probeResult="failure" output=< Jan 31 04:12:19 crc kubenswrapper[4667]: timeout: failed to connect service ":50051" within 1s Jan 31 04:12:19 crc kubenswrapper[4667]: > Jan 31 04:12:19 crc kubenswrapper[4667]: I0131 04:12:19.445574 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:22 crc kubenswrapper[4667]: I0131 04:12:22.216043 4667 scope.go:117] "RemoveContainer" containerID="78e89763bceccbecfbe1202910c776c519e0d7c00395f274122add8460a8db1c" Jan 31 04:12:22 crc kubenswrapper[4667]: I0131 04:12:22.244426 4667 scope.go:117] "RemoveContainer" containerID="f5619ebae99cdb634e963f926a5321d80524970329dd65fff8723f0849bf7d3e" Jan 31 04:12:22 crc kubenswrapper[4667]: I0131 04:12:22.350207 4667 scope.go:117] "RemoveContainer" containerID="78e03a127cf37019a547972f09a4e97a6c89bf3b1a482407aa602e7a2e416f7b" Jan 31 04:12:22 crc kubenswrapper[4667]: I0131 04:12:22.385711 4667 scope.go:117] "RemoveContainer" containerID="3ff82ebcfa6f4e8664d3d0d067b26b896ceb57ef35b92a86460b6a27a3ab8ed1" Jan 31 
04:12:26 crc kubenswrapper[4667]: I0131 04:12:26.066141 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f4d4c4b7-jpndh" Jan 31 04:12:26 crc kubenswrapper[4667]: I0131 04:12:26.142770 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-7bvl6"] Jan 31 04:12:26 crc kubenswrapper[4667]: I0131 04:12:26.143103 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" podUID="b971d20f-c021-4008-a666-1cfd5ea90764" containerName="dnsmasq-dns" containerID="cri-o://8f84f2abd83be74624f80616ad17aa082654338d021f528d6339ce59d646461d" gracePeriod=10 Jan 31 04:12:26 crc kubenswrapper[4667]: I0131 04:12:26.559403 4667 generic.go:334] "Generic (PLEG): container finished" podID="b971d20f-c021-4008-a666-1cfd5ea90764" containerID="8f84f2abd83be74624f80616ad17aa082654338d021f528d6339ce59d646461d" exitCode=0 Jan 31 04:12:26 crc kubenswrapper[4667]: I0131 04:12:26.559512 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" event={"ID":"b971d20f-c021-4008-a666-1cfd5ea90764","Type":"ContainerDied","Data":"8f84f2abd83be74624f80616ad17aa082654338d021f528d6339ce59d646461d"} Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.053436 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.143255 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-config\") pod \"b971d20f-c021-4008-a666-1cfd5ea90764\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.144451 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-dns-swift-storage-0\") pod \"b971d20f-c021-4008-a666-1cfd5ea90764\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.144578 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-ovsdbserver-sb\") pod \"b971d20f-c021-4008-a666-1cfd5ea90764\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.144634 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg9vw\" (UniqueName: \"kubernetes.io/projected/b971d20f-c021-4008-a666-1cfd5ea90764-kube-api-access-wg9vw\") pod \"b971d20f-c021-4008-a666-1cfd5ea90764\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.144688 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-openstack-edpm-ipam\") pod \"b971d20f-c021-4008-a666-1cfd5ea90764\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.144810 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-dns-svc\") pod \"b971d20f-c021-4008-a666-1cfd5ea90764\" (UID: 
\"b971d20f-c021-4008-a666-1cfd5ea90764\") " Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.144904 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-ovsdbserver-nb\") pod \"b971d20f-c021-4008-a666-1cfd5ea90764\" (UID: \"b971d20f-c021-4008-a666-1cfd5ea90764\") " Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.167363 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b971d20f-c021-4008-a666-1cfd5ea90764-kube-api-access-wg9vw" (OuterVolumeSpecName: "kube-api-access-wg9vw") pod "b971d20f-c021-4008-a666-1cfd5ea90764" (UID: "b971d20f-c021-4008-a666-1cfd5ea90764"). InnerVolumeSpecName "kube-api-access-wg9vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.209124 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b971d20f-c021-4008-a666-1cfd5ea90764" (UID: "b971d20f-c021-4008-a666-1cfd5ea90764"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.226282 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b971d20f-c021-4008-a666-1cfd5ea90764" (UID: "b971d20f-c021-4008-a666-1cfd5ea90764"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.228016 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b971d20f-c021-4008-a666-1cfd5ea90764" (UID: "b971d20f-c021-4008-a666-1cfd5ea90764"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.248235 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "b971d20f-c021-4008-a666-1cfd5ea90764" (UID: "b971d20f-c021-4008-a666-1cfd5ea90764"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.249718 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.249739 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg9vw\" (UniqueName: \"kubernetes.io/projected/b971d20f-c021-4008-a666-1cfd5ea90764-kube-api-access-wg9vw\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.249748 4667 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.249762 4667 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.249771 4667 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.252486 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b971d20f-c021-4008-a666-1cfd5ea90764" (UID: "b971d20f-c021-4008-a666-1cfd5ea90764"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.274944 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-config" (OuterVolumeSpecName: "config") pod "b971d20f-c021-4008-a666-1cfd5ea90764" (UID: "b971d20f-c021-4008-a666-1cfd5ea90764"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.352250 4667 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.354602 4667 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b971d20f-c021-4008-a666-1cfd5ea90764-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.571049 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" event={"ID":"b971d20f-c021-4008-a666-1cfd5ea90764","Type":"ContainerDied","Data":"5462dd7cc7b63ab70838b5cab49f69881daa0772b090a06ce6f7a90167a05dc6"} Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.571114 4667 scope.go:117] "RemoveContainer" containerID="8f84f2abd83be74624f80616ad17aa082654338d021f528d6339ce59d646461d" Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.571131 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-7bvl6" Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.600975 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-7bvl6"] Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.605548 4667 scope.go:117] "RemoveContainer" containerID="2ca1fdfe610f2e28548d7f68ab3661d13c06661a744c92d620431a7282143832" Jan 31 04:12:27 crc kubenswrapper[4667]: I0131 04:12:27.612062 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-7bvl6"] Jan 31 04:12:28 crc kubenswrapper[4667]: I0131 04:12:28.109332 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-phkrx" Jan 31 04:12:28 crc kubenswrapper[4667]: I0131 04:12:28.179267 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-phkrx" Jan 31 04:12:28 crc kubenswrapper[4667]: I0131 04:12:28.356421 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-phkrx"] Jan 31 04:12:29 crc kubenswrapper[4667]: I0131 04:12:29.294360 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b971d20f-c021-4008-a666-1cfd5ea90764" path="/var/lib/kubelet/pods/b971d20f-c021-4008-a666-1cfd5ea90764/volumes" Jan 31 04:12:29 crc kubenswrapper[4667]: I0131 04:12:29.593052 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-phkrx" podUID="a1a9dfcd-65ab-4c1e-af87-477f98da17cb" containerName="registry-server" containerID="cri-o://b080c71dd8a3c9123d9859e2e5c028b9a3495dbe0d2682dc29aedbb2553d45ee" gracePeriod=2 Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.113275 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-phkrx" Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.217728 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1a9dfcd-65ab-4c1e-af87-477f98da17cb-catalog-content\") pod \"a1a9dfcd-65ab-4c1e-af87-477f98da17cb\" (UID: \"a1a9dfcd-65ab-4c1e-af87-477f98da17cb\") " Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.217894 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvdzp\" (UniqueName: \"kubernetes.io/projected/a1a9dfcd-65ab-4c1e-af87-477f98da17cb-kube-api-access-bvdzp\") pod \"a1a9dfcd-65ab-4c1e-af87-477f98da17cb\" (UID: \"a1a9dfcd-65ab-4c1e-af87-477f98da17cb\") " Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.218255 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1a9dfcd-65ab-4c1e-af87-477f98da17cb-utilities\") pod \"a1a9dfcd-65ab-4c1e-af87-477f98da17cb\" (UID: \"a1a9dfcd-65ab-4c1e-af87-477f98da17cb\") " Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.219184 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1a9dfcd-65ab-4c1e-af87-477f98da17cb-utilities" (OuterVolumeSpecName: "utilities") pod "a1a9dfcd-65ab-4c1e-af87-477f98da17cb" (UID: "a1a9dfcd-65ab-4c1e-af87-477f98da17cb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.219365 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1a9dfcd-65ab-4c1e-af87-477f98da17cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.225881 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1a9dfcd-65ab-4c1e-af87-477f98da17cb-kube-api-access-bvdzp" (OuterVolumeSpecName: "kube-api-access-bvdzp") pod "a1a9dfcd-65ab-4c1e-af87-477f98da17cb" (UID: "a1a9dfcd-65ab-4c1e-af87-477f98da17cb"). InnerVolumeSpecName "kube-api-access-bvdzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.324677 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvdzp\" (UniqueName: \"kubernetes.io/projected/a1a9dfcd-65ab-4c1e-af87-477f98da17cb-kube-api-access-bvdzp\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.345727 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1a9dfcd-65ab-4c1e-af87-477f98da17cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1a9dfcd-65ab-4c1e-af87-477f98da17cb" (UID: "a1a9dfcd-65ab-4c1e-af87-477f98da17cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.427468 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1a9dfcd-65ab-4c1e-af87-477f98da17cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.606616 4667 generic.go:334] "Generic (PLEG): container finished" podID="a1a9dfcd-65ab-4c1e-af87-477f98da17cb" containerID="b080c71dd8a3c9123d9859e2e5c028b9a3495dbe0d2682dc29aedbb2553d45ee" exitCode=0 Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.606671 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phkrx" event={"ID":"a1a9dfcd-65ab-4c1e-af87-477f98da17cb","Type":"ContainerDied","Data":"b080c71dd8a3c9123d9859e2e5c028b9a3495dbe0d2682dc29aedbb2553d45ee"} Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.606707 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-phkrx" event={"ID":"a1a9dfcd-65ab-4c1e-af87-477f98da17cb","Type":"ContainerDied","Data":"14f08f52ae6f56692b142ddf9b49cb0b7a7ebeb34bf4383aac9d9886e37d0267"} Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.606728 4667 scope.go:117] "RemoveContainer" containerID="b080c71dd8a3c9123d9859e2e5c028b9a3495dbe0d2682dc29aedbb2553d45ee" Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.607580 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-phkrx" Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.662514 4667 scope.go:117] "RemoveContainer" containerID="3aea1f2fd52df559534b0bab8caa8fe7dbd7602201b4ef49349793b962489a92" Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.663467 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-phkrx"] Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.673891 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-phkrx"] Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.702537 4667 scope.go:117] "RemoveContainer" containerID="fa317bd5ee64035f868efc45bd20f6e1cdfed6926a7c9c43875510716f14ecf8" Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.747281 4667 scope.go:117] "RemoveContainer" containerID="b080c71dd8a3c9123d9859e2e5c028b9a3495dbe0d2682dc29aedbb2553d45ee" Jan 31 04:12:30 crc kubenswrapper[4667]: E0131 04:12:30.748415 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b080c71dd8a3c9123d9859e2e5c028b9a3495dbe0d2682dc29aedbb2553d45ee\": container with ID starting with b080c71dd8a3c9123d9859e2e5c028b9a3495dbe0d2682dc29aedbb2553d45ee not found: ID does not exist" containerID="b080c71dd8a3c9123d9859e2e5c028b9a3495dbe0d2682dc29aedbb2553d45ee" Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.748503 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b080c71dd8a3c9123d9859e2e5c028b9a3495dbe0d2682dc29aedbb2553d45ee"} err="failed to get container status \"b080c71dd8a3c9123d9859e2e5c028b9a3495dbe0d2682dc29aedbb2553d45ee\": rpc error: code = NotFound desc = could not find container \"b080c71dd8a3c9123d9859e2e5c028b9a3495dbe0d2682dc29aedbb2553d45ee\": container with ID starting with b080c71dd8a3c9123d9859e2e5c028b9a3495dbe0d2682dc29aedbb2553d45ee not found: ID does not exist" Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.748563 4667 scope.go:117] "RemoveContainer" containerID="3aea1f2fd52df559534b0bab8caa8fe7dbd7602201b4ef49349793b962489a92" Jan 31 04:12:30 crc kubenswrapper[4667]: E0131 04:12:30.749182 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3aea1f2fd52df559534b0bab8caa8fe7dbd7602201b4ef49349793b962489a92\": container with ID starting with 3aea1f2fd52df559534b0bab8caa8fe7dbd7602201b4ef49349793b962489a92 not found: ID does not exist" containerID="3aea1f2fd52df559534b0bab8caa8fe7dbd7602201b4ef49349793b962489a92" Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.749224 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aea1f2fd52df559534b0bab8caa8fe7dbd7602201b4ef49349793b962489a92"} err="failed to get container status \"3aea1f2fd52df559534b0bab8caa8fe7dbd7602201b4ef49349793b962489a92\": rpc error: code = NotFound desc = could not find container \"3aea1f2fd52df559534b0bab8caa8fe7dbd7602201b4ef49349793b962489a92\": container with ID starting with 3aea1f2fd52df559534b0bab8caa8fe7dbd7602201b4ef49349793b962489a92 not found: ID does not exist" Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.749241 4667 scope.go:117] "RemoveContainer" containerID="fa317bd5ee64035f868efc45bd20f6e1cdfed6926a7c9c43875510716f14ecf8" Jan 31 04:12:30 crc kubenswrapper[4667]: E0131 04:12:30.749623 4667 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"fa317bd5ee64035f868efc45bd20f6e1cdfed6926a7c9c43875510716f14ecf8\": container with ID starting with fa317bd5ee64035f868efc45bd20f6e1cdfed6926a7c9c43875510716f14ecf8 not found: ID does not exist" containerID="fa317bd5ee64035f868efc45bd20f6e1cdfed6926a7c9c43875510716f14ecf8" Jan 31 04:12:30 crc kubenswrapper[4667]: I0131 04:12:30.749670 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa317bd5ee64035f868efc45bd20f6e1cdfed6926a7c9c43875510716f14ecf8"} err="failed to get container status \"fa317bd5ee64035f868efc45bd20f6e1cdfed6926a7c9c43875510716f14ecf8\": rpc error: code = NotFound desc = could not find container \"fa317bd5ee64035f868efc45bd20f6e1cdfed6926a7c9c43875510716f14ecf8\": container with ID starting with fa317bd5ee64035f868efc45bd20f6e1cdfed6926a7c9c43875510716f14ecf8 not found: ID does not exist" Jan 31 04:12:31 crc kubenswrapper[4667]: I0131 04:12:31.298095 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1a9dfcd-65ab-4c1e-af87-477f98da17cb" path="/var/lib/kubelet/pods/a1a9dfcd-65ab-4c1e-af87-477f98da17cb/volumes" Jan 31 04:12:37 crc kubenswrapper[4667]: I0131 04:12:37.684766 4667 generic.go:334] "Generic (PLEG): container finished" podID="aca13392-5591-4b68-9948-c5e5fe558803" containerID="e31b870abf0376603347f49b0b9bf9ccb823de8a86d25a0c85b2525a0b52caf4" exitCode=0 Jan 31 04:12:37 crc kubenswrapper[4667]: I0131 04:12:37.684899 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aca13392-5591-4b68-9948-c5e5fe558803","Type":"ContainerDied","Data":"e31b870abf0376603347f49b0b9bf9ccb823de8a86d25a0c85b2525a0b52caf4"} Jan 31 04:12:38 crc kubenswrapper[4667]: I0131 04:12:38.696172 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aca13392-5591-4b68-9948-c5e5fe558803","Type":"ContainerStarted","Data":"23e7af3c384c54269861c6950f56a9d96f29c45cce568248f8e30949b0a5d2d5"} Jan 31 04:12:38 crc kubenswrapper[4667]: I0131 04:12:38.698098 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 31 04:12:38 crc kubenswrapper[4667]: I0131 04:12:38.725555 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.725532931000004 podStartE2EDuration="36.725532931s" podCreationTimestamp="2026-01-31 04:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:12:38.718420062 +0000 UTC m=+1482.234755381" watchObservedRunningTime="2026-01-31 04:12:38.725532931 +0000 UTC m=+1482.241868230" Jan 31 04:12:39 crc kubenswrapper[4667]: I0131 04:12:39.707972 4667 generic.go:334] "Generic (PLEG): container finished" podID="acadb76e-2e9d-4af4-a5d1-fb5f28b006c6" containerID="33c296e2df48bdf788ad1603574ec51626caa8d8c62b9d6d72ec14f9c864e043" exitCode=0 Jan 31 04:12:39 crc kubenswrapper[4667]: I0131 04:12:39.708021 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6","Type":"ContainerDied","Data":"33c296e2df48bdf788ad1603574ec51626caa8d8c62b9d6d72ec14f9c864e043"} Jan 31 04:12:39 crc kubenswrapper[4667]: I0131 04:12:39.937180 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q"] Jan 31 04:12:39 crc 
kubenswrapper[4667]: E0131 04:12:39.937998 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a9dfcd-65ab-4c1e-af87-477f98da17cb" containerName="extract-utilities" Jan 31 04:12:39 crc kubenswrapper[4667]: I0131 04:12:39.938016 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a9dfcd-65ab-4c1e-af87-477f98da17cb" containerName="extract-utilities" Jan 31 04:12:39 crc kubenswrapper[4667]: E0131 04:12:39.938034 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a9dfcd-65ab-4c1e-af87-477f98da17cb" containerName="registry-server" Jan 31 04:12:39 crc kubenswrapper[4667]: I0131 04:12:39.938040 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a9dfcd-65ab-4c1e-af87-477f98da17cb" containerName="registry-server" Jan 31 04:12:39 crc kubenswrapper[4667]: E0131 04:12:39.938049 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b971d20f-c021-4008-a666-1cfd5ea90764" containerName="init" Jan 31 04:12:39 crc kubenswrapper[4667]: I0131 04:12:39.938055 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="b971d20f-c021-4008-a666-1cfd5ea90764" containerName="init" Jan 31 04:12:39 crc kubenswrapper[4667]: E0131 04:12:39.938070 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3110271e-39e9-431e-a5dd-880758179c6c" containerName="dnsmasq-dns" Jan 31 04:12:39 crc kubenswrapper[4667]: I0131 04:12:39.938076 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="3110271e-39e9-431e-a5dd-880758179c6c" containerName="dnsmasq-dns" Jan 31 04:12:39 crc kubenswrapper[4667]: E0131 04:12:39.938084 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3110271e-39e9-431e-a5dd-880758179c6c" containerName="init" Jan 31 04:12:39 crc kubenswrapper[4667]: I0131 04:12:39.938089 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="3110271e-39e9-431e-a5dd-880758179c6c" containerName="init" Jan 31 04:12:39 crc kubenswrapper[4667]: E0131 04:12:39.938108 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a9dfcd-65ab-4c1e-af87-477f98da17cb" containerName="extract-content" Jan 31 04:12:39 crc kubenswrapper[4667]: I0131 04:12:39.938113 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a9dfcd-65ab-4c1e-af87-477f98da17cb" containerName="extract-content" Jan 31 04:12:39 crc kubenswrapper[4667]: E0131 04:12:39.938127 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b971d20f-c021-4008-a666-1cfd5ea90764" containerName="dnsmasq-dns" Jan 31 04:12:39 crc kubenswrapper[4667]: I0131 04:12:39.938135 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="b971d20f-c021-4008-a666-1cfd5ea90764" containerName="dnsmasq-dns" Jan 31 04:12:39 crc kubenswrapper[4667]: I0131 04:12:39.938308 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a9dfcd-65ab-4c1e-af87-477f98da17cb" containerName="registry-server" Jan 31 04:12:39 crc kubenswrapper[4667]: I0131 04:12:39.938325 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="3110271e-39e9-431e-a5dd-880758179c6c" containerName="dnsmasq-dns" Jan 31 04:12:39 crc kubenswrapper[4667]: I0131 04:12:39.938334 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="b971d20f-c021-4008-a666-1cfd5ea90764" containerName="dnsmasq-dns" Jan 31 04:12:39 crc kubenswrapper[4667]: I0131 04:12:39.939008 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" Jan 31 04:12:39 crc kubenswrapper[4667]: I0131 04:12:39.942546 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:12:39 crc kubenswrapper[4667]: I0131 04:12:39.942798 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:12:39 crc kubenswrapper[4667]: I0131 04:12:39.942806 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7p2q" Jan 31 04:12:39 crc kubenswrapper[4667]: I0131 04:12:39.942986 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:12:39 crc kubenswrapper[4667]: I0131 04:12:39.980435 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q"] Jan 31 04:12:40 crc kubenswrapper[4667]: I0131 04:12:40.134456 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/500e62ac-7319-4438-ab89-c072499f717c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q\" (UID: \"500e62ac-7319-4438-ab89-c072499f717c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" Jan 31 04:12:40 crc kubenswrapper[4667]: I0131 04:12:40.134554 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500e62ac-7319-4438-ab89-c072499f717c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q\" (UID: \"500e62ac-7319-4438-ab89-c072499f717c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" Jan 31 04:12:40 crc kubenswrapper[4667]: I0131 04:12:40.135979 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/500e62ac-7319-4438-ab89-c072499f717c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q\" (UID: \"500e62ac-7319-4438-ab89-c072499f717c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" Jan 31 04:12:40 crc kubenswrapper[4667]: I0131 04:12:40.136037 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnnbr\" (UniqueName: \"kubernetes.io/projected/500e62ac-7319-4438-ab89-c072499f717c-kube-api-access-lnnbr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q\" (UID: \"500e62ac-7319-4438-ab89-c072499f717c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" Jan 31 04:12:40 crc kubenswrapper[4667]: I0131 04:12:40.237486 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/500e62ac-7319-4438-ab89-c072499f717c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q\" (UID: \"500e62ac-7319-4438-ab89-c072499f717c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" Jan 31 04:12:40 crc kubenswrapper[4667]: I0131 04:12:40.237856 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnnbr\" (UniqueName: 
\"kubernetes.io/projected/500e62ac-7319-4438-ab89-c072499f717c-kube-api-access-lnnbr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q\" (UID: \"500e62ac-7319-4438-ab89-c072499f717c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" Jan 31 04:12:40 crc kubenswrapper[4667]: I0131 04:12:40.238029 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/500e62ac-7319-4438-ab89-c072499f717c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q\" (UID: \"500e62ac-7319-4438-ab89-c072499f717c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" Jan 31 04:12:40 crc kubenswrapper[4667]: I0131 04:12:40.238160 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500e62ac-7319-4438-ab89-c072499f717c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q\" (UID: \"500e62ac-7319-4438-ab89-c072499f717c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" Jan 31 04:12:40 crc kubenswrapper[4667]: I0131 04:12:40.243684 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/500e62ac-7319-4438-ab89-c072499f717c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q\" (UID: \"500e62ac-7319-4438-ab89-c072499f717c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" Jan 31 04:12:40 crc kubenswrapper[4667]: I0131 04:12:40.246969 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/500e62ac-7319-4438-ab89-c072499f717c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q\" (UID: \"500e62ac-7319-4438-ab89-c072499f717c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" Jan 31 04:12:40 crc kubenswrapper[4667]: I0131 04:12:40.247317 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500e62ac-7319-4438-ab89-c072499f717c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q\" (UID: \"500e62ac-7319-4438-ab89-c072499f717c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" Jan 31 04:12:40 crc kubenswrapper[4667]: I0131 04:12:40.258824 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnnbr\" (UniqueName: \"kubernetes.io/projected/500e62ac-7319-4438-ab89-c072499f717c-kube-api-access-lnnbr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q\" (UID: \"500e62ac-7319-4438-ab89-c072499f717c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" Jan 31 04:12:40 crc kubenswrapper[4667]: I0131 04:12:40.259431 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" Jan 31 04:12:40 crc kubenswrapper[4667]: I0131 04:12:40.719441 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"acadb76e-2e9d-4af4-a5d1-fb5f28b006c6","Type":"ContainerStarted","Data":"a6eb304505a08c2b31c20b7e121a7fb43b96664e93604f29248776f653e2680a"} Jan 31 04:12:40 crc kubenswrapper[4667]: I0131 04:12:40.721346 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:12:40 crc kubenswrapper[4667]: I0131 04:12:40.752709 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.752689426 podStartE2EDuration="37.752689426s" podCreationTimestamp="2026-01-31 04:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:12:40.745053383 +0000 UTC m=+1484.261388702" watchObservedRunningTime="2026-01-31 04:12:40.752689426 +0000 UTC m=+1484.269024725" Jan 31 04:12:41 crc kubenswrapper[4667]: I0131 04:12:41.232654 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q"] Jan 31 04:12:41 crc kubenswrapper[4667]: W0131 04:12:41.239075 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod500e62ac_7319_4438_ab89_c072499f717c.slice/crio-6f77abefe5add1e8299b705b117d554ce63d8ddd9f3669885946fcf35dcb2dea WatchSource:0}: Error finding container 6f77abefe5add1e8299b705b117d554ce63d8ddd9f3669885946fcf35dcb2dea: Status 404 returned error can't find the container with id 6f77abefe5add1e8299b705b117d554ce63d8ddd9f3669885946fcf35dcb2dea Jan 31 04:12:41 crc kubenswrapper[4667]: I0131 04:12:41.731707 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" event={"ID":"500e62ac-7319-4438-ab89-c072499f717c","Type":"ContainerStarted","Data":"6f77abefe5add1e8299b705b117d554ce63d8ddd9f3669885946fcf35dcb2dea"} Jan 31 04:12:45 crc kubenswrapper[4667]: I0131 04:12:45.704655 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:12:45 crc kubenswrapper[4667]: I0131 04:12:45.705319 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:12:45 crc kubenswrapper[4667]: I0131 04:12:45.705368 4667 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 04:12:45 crc kubenswrapper[4667]: I0131 04:12:45.705910 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2541fc2fda6b826061d737e4a0c482f1977e25566cf6f78f58956c4922322ef"} pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:12:45 crc kubenswrapper[4667]: I0131 04:12:45.705962 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" containerID="cri-o://f2541fc2fda6b826061d737e4a0c482f1977e25566cf6f78f58956c4922322ef" gracePeriod=600 Jan 31 04:12:46 crc kubenswrapper[4667]: I0131 04:12:46.799444 4667 generic.go:334] "Generic (PLEG): container finished" podID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerID="f2541fc2fda6b826061d737e4a0c482f1977e25566cf6f78f58956c4922322ef" exitCode=0 Jan 31 04:12:46 crc kubenswrapper[4667]: I0131 04:12:46.799604 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerDied","Data":"f2541fc2fda6b826061d737e4a0c482f1977e25566cf6f78f58956c4922322ef"} Jan 31 04:12:46 crc kubenswrapper[4667]: I0131 04:12:46.799808 4667 scope.go:117] "RemoveContainer" containerID="4def2f985a42835fdac5d21069cf64f18010ecd6521e48ae16ef15b594559e50" Jan 31 04:12:52 crc kubenswrapper[4667]: I0131 04:12:52.735611 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="aca13392-5591-4b68-9948-c5e5fe558803" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.211:5671: connect: connection refused" Jan 31 04:12:52 crc kubenswrapper[4667]: I0131 04:12:52.883344 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerStarted","Data":"52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50"} Jan 31 04:12:52 crc kubenswrapper[4667]: I0131 04:12:52.885937 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" event={"ID":"500e62ac-7319-4438-ab89-c072499f717c","Type":"ContainerStarted","Data":"76e40d514fde1e90a170bca9c3fe512f1f2c494a39c3300994872fa00904ea61"} Jan 31 04:12:53 crc kubenswrapper[4667]: I0131 04:12:52.999481 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" podStartSLOduration=2.7787964450000002 podStartE2EDuration="13.999451377s" podCreationTimestamp="2026-01-31 04:12:39 +0000 UTC" firstStartedPulling="2026-01-31 04:12:41.240965404 +0000 UTC m=+1484.757300703" lastFinishedPulling="2026-01-31 04:12:52.461620336 +0000 UTC m=+1495.977955635" observedRunningTime="2026-01-31 04:12:52.963202128 +0000 UTC m=+1496.479537417" watchObservedRunningTime="2026-01-31 04:12:52.999451377 +0000 UTC m=+1496.515786676" Jan 31 04:12:53 crc kubenswrapper[4667]: I0131 04:12:53.804018 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 31 04:13:02 crc kubenswrapper[4667]: I0131 04:13:02.736733 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 31 04:13:04 crc kubenswrapper[4667]: I0131 04:13:04.013044 4667 generic.go:334] "Generic (PLEG): container finished" podID="500e62ac-7319-4438-ab89-c072499f717c" containerID="76e40d514fde1e90a170bca9c3fe512f1f2c494a39c3300994872fa00904ea61" exitCode=0 Jan 31 04:13:04 crc kubenswrapper[4667]: I0131 04:13:04.013133 4667 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" event={"ID":"500e62ac-7319-4438-ab89-c072499f717c","Type":"ContainerDied","Data":"76e40d514fde1e90a170bca9c3fe512f1f2c494a39c3300994872fa00904ea61"} Jan 31 04:13:05 crc kubenswrapper[4667]: I0131 04:13:05.530224 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" Jan 31 04:13:05 crc kubenswrapper[4667]: I0131 04:13:05.544780 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/500e62ac-7319-4438-ab89-c072499f717c-inventory\") pod \"500e62ac-7319-4438-ab89-c072499f717c\" (UID: \"500e62ac-7319-4438-ab89-c072499f717c\") " Jan 31 04:13:05 crc kubenswrapper[4667]: I0131 04:13:05.544873 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/500e62ac-7319-4438-ab89-c072499f717c-ssh-key-openstack-edpm-ipam\") pod \"500e62ac-7319-4438-ab89-c072499f717c\" (UID: \"500e62ac-7319-4438-ab89-c072499f717c\") " Jan 31 04:13:05 crc kubenswrapper[4667]: I0131 04:13:05.545007 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnnbr\" (UniqueName: \"kubernetes.io/projected/500e62ac-7319-4438-ab89-c072499f717c-kube-api-access-lnnbr\") pod \"500e62ac-7319-4438-ab89-c072499f717c\" (UID: \"500e62ac-7319-4438-ab89-c072499f717c\") " Jan 31 04:13:05 crc kubenswrapper[4667]: I0131 04:13:05.545145 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500e62ac-7319-4438-ab89-c072499f717c-repo-setup-combined-ca-bundle\") pod \"500e62ac-7319-4438-ab89-c072499f717c\" (UID: \"500e62ac-7319-4438-ab89-c072499f717c\") " Jan 31 04:13:05 crc kubenswrapper[4667]: I0131 04:13:05.563177 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/500e62ac-7319-4438-ab89-c072499f717c-kube-api-access-lnnbr" (OuterVolumeSpecName: "kube-api-access-lnnbr") pod "500e62ac-7319-4438-ab89-c072499f717c" (UID: "500e62ac-7319-4438-ab89-c072499f717c"). InnerVolumeSpecName "kube-api-access-lnnbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:13:05 crc kubenswrapper[4667]: I0131 04:13:05.566212 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/500e62ac-7319-4438-ab89-c072499f717c-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "500e62ac-7319-4438-ab89-c072499f717c" (UID: "500e62ac-7319-4438-ab89-c072499f717c"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:13:05 crc kubenswrapper[4667]: I0131 04:13:05.648759 4667 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500e62ac-7319-4438-ab89-c072499f717c-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:13:05 crc kubenswrapper[4667]: I0131 04:13:05.652817 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnnbr\" (UniqueName: \"kubernetes.io/projected/500e62ac-7319-4438-ab89-c072499f717c-kube-api-access-lnnbr\") on node \"crc\" DevicePath \"\"" Jan 31 04:13:05 crc kubenswrapper[4667]: I0131 04:13:05.664469 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/500e62ac-7319-4438-ab89-c072499f717c-inventory" (OuterVolumeSpecName: "inventory") pod "500e62ac-7319-4438-ab89-c072499f717c" (UID: "500e62ac-7319-4438-ab89-c072499f717c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:13:05 crc kubenswrapper[4667]: I0131 04:13:05.701986 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/500e62ac-7319-4438-ab89-c072499f717c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "500e62ac-7319-4438-ab89-c072499f717c" (UID: "500e62ac-7319-4438-ab89-c072499f717c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:13:05 crc kubenswrapper[4667]: I0131 04:13:05.754593 4667 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/500e62ac-7319-4438-ab89-c072499f717c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:13:05 crc kubenswrapper[4667]: I0131 04:13:05.754645 4667 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/500e62ac-7319-4438-ab89-c072499f717c-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.037785 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" event={"ID":"500e62ac-7319-4438-ab89-c072499f717c","Type":"ContainerDied","Data":"6f77abefe5add1e8299b705b117d554ce63d8ddd9f3669885946fcf35dcb2dea"} Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.037871 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f77abefe5add1e8299b705b117d554ce63d8ddd9f3669885946fcf35dcb2dea" Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.038002 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q" Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.152901 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2"] Jan 31 04:13:06 crc kubenswrapper[4667]: E0131 04:13:06.158012 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="500e62ac-7319-4438-ab89-c072499f717c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.158263 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="500e62ac-7319-4438-ab89-c072499f717c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.158727 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="500e62ac-7319-4438-ab89-c072499f717c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.160093 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2" Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.164667 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7p2q" Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.168758 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.169519 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.169875 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2"] Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.171582 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.266293 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt77p\" (UniqueName: \"kubernetes.io/projected/c6e23bd4-49c8-4691-ab45-5426e6c3cc6f-kube-api-access-tt77p\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mkct2\" (UID: \"c6e23bd4-49c8-4691-ab45-5426e6c3cc6f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2" Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.266705 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6e23bd4-49c8-4691-ab45-5426e6c3cc6f-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mkct2\" (UID: \"c6e23bd4-49c8-4691-ab45-5426e6c3cc6f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2" Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.267004 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6e23bd4-49c8-4691-ab45-5426e6c3cc6f-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mkct2\" (UID: \"c6e23bd4-49c8-4691-ab45-5426e6c3cc6f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2" Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.370349 4667 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6e23bd4-49c8-4691-ab45-5426e6c3cc6f-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mkct2\" (UID: \"c6e23bd4-49c8-4691-ab45-5426e6c3cc6f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2" Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.370440 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt77p\" (UniqueName: \"kubernetes.io/projected/c6e23bd4-49c8-4691-ab45-5426e6c3cc6f-kube-api-access-tt77p\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mkct2\" (UID: \"c6e23bd4-49c8-4691-ab45-5426e6c3cc6f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2" Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.370541 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6e23bd4-49c8-4691-ab45-5426e6c3cc6f-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mkct2\" (UID: \"c6e23bd4-49c8-4691-ab45-5426e6c3cc6f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2" Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.377330 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6e23bd4-49c8-4691-ab45-5426e6c3cc6f-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mkct2\" (UID: \"c6e23bd4-49c8-4691-ab45-5426e6c3cc6f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2" Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.378644 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6e23bd4-49c8-4691-ab45-5426e6c3cc6f-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mkct2\" (UID: \"c6e23bd4-49c8-4691-ab45-5426e6c3cc6f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2" Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.391494 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt77p\" (UniqueName: \"kubernetes.io/projected/c6e23bd4-49c8-4691-ab45-5426e6c3cc6f-kube-api-access-tt77p\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mkct2\" (UID: \"c6e23bd4-49c8-4691-ab45-5426e6c3cc6f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2" Jan 31 04:13:06 crc kubenswrapper[4667]: I0131 04:13:06.481602 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2" Jan 31 04:13:07 crc kubenswrapper[4667]: I0131 04:13:07.097851 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2"] Jan 31 04:13:08 crc kubenswrapper[4667]: I0131 04:13:08.062133 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2" event={"ID":"c6e23bd4-49c8-4691-ab45-5426e6c3cc6f","Type":"ContainerStarted","Data":"fb199ec46cfd5324e674b1f79819543b70d353a7565739257ee79458f2c824f1"} Jan 31 04:13:08 crc kubenswrapper[4667]: I0131 04:13:08.062691 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2" event={"ID":"c6e23bd4-49c8-4691-ab45-5426e6c3cc6f","Type":"ContainerStarted","Data":"d8de1ccd409f8befac85f36db1890427c8a8499874550bfeaf51ab8788eceb90"} Jan 31 04:13:08 crc kubenswrapper[4667]: I0131 04:13:08.096776 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2" podStartSLOduration=1.569012563 podStartE2EDuration="2.096746255s" podCreationTimestamp="2026-01-31 04:13:06 +0000 UTC" firstStartedPulling="2026-01-31 04:13:07.101745451 +0000 UTC m=+1510.618080750" lastFinishedPulling="2026-01-31 04:13:07.629479103 +0000 UTC m=+1511.145814442" observedRunningTime="2026-01-31 04:13:08.088212009 +0000 UTC m=+1511.604547308" watchObservedRunningTime="2026-01-31 04:13:08.096746255 +0000 UTC m=+1511.613081554" Jan 31 04:13:11 crc kubenswrapper[4667]: I0131 04:13:11.117336 4667 generic.go:334] "Generic (PLEG): container finished" podID="c6e23bd4-49c8-4691-ab45-5426e6c3cc6f" containerID="fb199ec46cfd5324e674b1f79819543b70d353a7565739257ee79458f2c824f1" exitCode=0 Jan 31 04:13:11 crc kubenswrapper[4667]: I0131 04:13:11.117428 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2" event={"ID":"c6e23bd4-49c8-4691-ab45-5426e6c3cc6f","Type":"ContainerDied","Data":"fb199ec46cfd5324e674b1f79819543b70d353a7565739257ee79458f2c824f1"} Jan 31 04:13:12 crc kubenswrapper[4667]: I0131 04:13:12.701435 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2" Jan 31 04:13:12 crc kubenswrapper[4667]: I0131 04:13:12.723584 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt77p\" (UniqueName: \"kubernetes.io/projected/c6e23bd4-49c8-4691-ab45-5426e6c3cc6f-kube-api-access-tt77p\") pod \"c6e23bd4-49c8-4691-ab45-5426e6c3cc6f\" (UID: \"c6e23bd4-49c8-4691-ab45-5426e6c3cc6f\") " Jan 31 04:13:12 crc kubenswrapper[4667]: I0131 04:13:12.723809 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6e23bd4-49c8-4691-ab45-5426e6c3cc6f-inventory\") pod \"c6e23bd4-49c8-4691-ab45-5426e6c3cc6f\" (UID: \"c6e23bd4-49c8-4691-ab45-5426e6c3cc6f\") " Jan 31 04:13:12 crc kubenswrapper[4667]: I0131 04:13:12.724035 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6e23bd4-49c8-4691-ab45-5426e6c3cc6f-ssh-key-openstack-edpm-ipam\") pod \"c6e23bd4-49c8-4691-ab45-5426e6c3cc6f\" (UID: \"c6e23bd4-49c8-4691-ab45-5426e6c3cc6f\") " Jan 31 04:13:12 crc kubenswrapper[4667]: I0131 04:13:12.733413 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6e23bd4-49c8-4691-ab45-5426e6c3cc6f-kube-api-access-tt77p" (OuterVolumeSpecName: "kube-api-access-tt77p") pod "c6e23bd4-49c8-4691-ab45-5426e6c3cc6f" (UID: "c6e23bd4-49c8-4691-ab45-5426e6c3cc6f"). InnerVolumeSpecName "kube-api-access-tt77p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:13:12 crc kubenswrapper[4667]: I0131 04:13:12.791929 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e23bd4-49c8-4691-ab45-5426e6c3cc6f-inventory" (OuterVolumeSpecName: "inventory") pod "c6e23bd4-49c8-4691-ab45-5426e6c3cc6f" (UID: "c6e23bd4-49c8-4691-ab45-5426e6c3cc6f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:13:12 crc kubenswrapper[4667]: I0131 04:13:12.817242 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e23bd4-49c8-4691-ab45-5426e6c3cc6f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c6e23bd4-49c8-4691-ab45-5426e6c3cc6f" (UID: "c6e23bd4-49c8-4691-ab45-5426e6c3cc6f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:13:12 crc kubenswrapper[4667]: I0131 04:13:12.826985 4667 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6e23bd4-49c8-4691-ab45-5426e6c3cc6f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:13:12 crc kubenswrapper[4667]: I0131 04:13:12.827144 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt77p\" (UniqueName: \"kubernetes.io/projected/c6e23bd4-49c8-4691-ab45-5426e6c3cc6f-kube-api-access-tt77p\") on node \"crc\" DevicePath \"\"" Jan 31 04:13:12 crc kubenswrapper[4667]: I0131 04:13:12.827230 4667 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6e23bd4-49c8-4691-ab45-5426e6c3cc6f-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.144713 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2" event={"ID":"c6e23bd4-49c8-4691-ab45-5426e6c3cc6f","Type":"ContainerDied","Data":"d8de1ccd409f8befac85f36db1890427c8a8499874550bfeaf51ab8788eceb90"} Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.144776 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8de1ccd409f8befac85f36db1890427c8a8499874550bfeaf51ab8788eceb90" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.144881 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mkct2" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.277526 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5"] Jan 31 04:13:13 crc kubenswrapper[4667]: E0131 04:13:13.278368 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e23bd4-49c8-4691-ab45-5426e6c3cc6f" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.278386 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e23bd4-49c8-4691-ab45-5426e6c3cc6f" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.278592 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e23bd4-49c8-4691-ab45-5426e6c3cc6f" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.281706 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.286401 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7p2q" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.286671 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.286824 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.286955 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.310547 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5"] Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.342982 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj55j\" (UniqueName: \"kubernetes.io/projected/24442823-d584-44f3-bf92-1e3382adb87f-kube-api-access-vj55j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5\" (UID: \"24442823-d584-44f3-bf92-1e3382adb87f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.343124 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24442823-d584-44f3-bf92-1e3382adb87f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5\" (UID: \"24442823-d584-44f3-bf92-1e3382adb87f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.343237 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24442823-d584-44f3-bf92-1e3382adb87f-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5\" (UID: \"24442823-d584-44f3-bf92-1e3382adb87f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.343290 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24442823-d584-44f3-bf92-1e3382adb87f-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5\" (UID: \"24442823-d584-44f3-bf92-1e3382adb87f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.445969 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24442823-d584-44f3-bf92-1e3382adb87f-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5\" (UID: \"24442823-d584-44f3-bf92-1e3382adb87f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.446079 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/24442823-d584-44f3-bf92-1e3382adb87f-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5\" (UID: \"24442823-d584-44f3-bf92-1e3382adb87f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.446190 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj55j\" (UniqueName: \"kubernetes.io/projected/24442823-d584-44f3-bf92-1e3382adb87f-kube-api-access-vj55j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5\" (UID: \"24442823-d584-44f3-bf92-1e3382adb87f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.446263 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24442823-d584-44f3-bf92-1e3382adb87f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5\" (UID: \"24442823-d584-44f3-bf92-1e3382adb87f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.451830 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24442823-d584-44f3-bf92-1e3382adb87f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5\" (UID: \"24442823-d584-44f3-bf92-1e3382adb87f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.452655 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24442823-d584-44f3-bf92-1e3382adb87f-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5\" (UID: \"24442823-d584-44f3-bf92-1e3382adb87f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.453123 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24442823-d584-44f3-bf92-1e3382adb87f-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5\" (UID: \"24442823-d584-44f3-bf92-1e3382adb87f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.474747 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj55j\" (UniqueName: \"kubernetes.io/projected/24442823-d584-44f3-bf92-1e3382adb87f-kube-api-access-vj55j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5\" (UID: \"24442823-d584-44f3-bf92-1e3382adb87f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" Jan 31 04:13:13 crc kubenswrapper[4667]: I0131 04:13:13.608470 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" Jan 31 04:13:14 crc kubenswrapper[4667]: W0131 04:13:14.407595 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24442823_d584_44f3_bf92_1e3382adb87f.slice/crio-dd7666c17c4e4d59c2e3a3590700d866002f59c3492636358633b75ed6857941 WatchSource:0}: Error finding container dd7666c17c4e4d59c2e3a3590700d866002f59c3492636358633b75ed6857941: Status 404 returned error can't find the container with id dd7666c17c4e4d59c2e3a3590700d866002f59c3492636358633b75ed6857941 Jan 31 04:13:14 crc kubenswrapper[4667]: I0131 04:13:14.434615 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5"] Jan 31 04:13:15 crc kubenswrapper[4667]: I0131 04:13:15.179285 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" event={"ID":"24442823-d584-44f3-bf92-1e3382adb87f","Type":"ContainerStarted","Data":"4283f36ba209c17e7b4b790b8eff93244c1a6528122e1453a1d867948bf6086a"} Jan 31 04:13:15 crc kubenswrapper[4667]: I0131 04:13:15.179890 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" event={"ID":"24442823-d584-44f3-bf92-1e3382adb87f","Type":"ContainerStarted","Data":"dd7666c17c4e4d59c2e3a3590700d866002f59c3492636358633b75ed6857941"} Jan 31 04:13:15 crc kubenswrapper[4667]: I0131 04:13:15.203245 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" podStartSLOduration=1.792308762 podStartE2EDuration="2.203217961s" podCreationTimestamp="2026-01-31 04:13:13 +0000 UTC" firstStartedPulling="2026-01-31 04:13:14.412762302 +0000 UTC m=+1517.929097601" lastFinishedPulling="2026-01-31 04:13:14.823671481 +0000 UTC m=+1518.340006800" observedRunningTime="2026-01-31 04:13:15.202573384 +0000 UTC m=+1518.718908673" watchObservedRunningTime="2026-01-31 04:13:15.203217961 +0000 UTC m=+1518.719553270" Jan 31 04:13:22 crc kubenswrapper[4667]: I0131 04:13:22.709553 4667 scope.go:117] "RemoveContainer" containerID="f85a64badc06a14e431e6a5acecea616ab84f8d1c01b2bf0c7d62704c4ccfa84" Jan 31 04:13:22 crc kubenswrapper[4667]: I0131 04:13:22.751120 4667 scope.go:117] "RemoveContainer" containerID="08193dbd4c1e145245c6f8205aa1840ad42db04eb19e0fcf4fd16878230e64b9" Jan 31 04:13:54 crc kubenswrapper[4667]: I0131 04:13:54.248585 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hj84f"] Jan 31 04:13:54 crc kubenswrapper[4667]: I0131 04:13:54.253032 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hj84f" Jan 31 04:13:54 crc kubenswrapper[4667]: I0131 04:13:54.272728 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f530d6cb-612c-43a8-bf77-eebb26b545fa-utilities\") pod \"certified-operators-hj84f\" (UID: \"f530d6cb-612c-43a8-bf77-eebb26b545fa\") " pod="openshift-marketplace/certified-operators-hj84f" Jan 31 04:13:54 crc kubenswrapper[4667]: I0131 04:13:54.273507 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srhl9\" (UniqueName: \"kubernetes.io/projected/f530d6cb-612c-43a8-bf77-eebb26b545fa-kube-api-access-srhl9\") pod \"certified-operators-hj84f\" (UID: \"f530d6cb-612c-43a8-bf77-eebb26b545fa\") " pod="openshift-marketplace/certified-operators-hj84f" Jan 31 04:13:54 crc kubenswrapper[4667]: I0131 04:13:54.273565 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f530d6cb-612c-43a8-bf77-eebb26b545fa-catalog-content\") pod \"certified-operators-hj84f\" (UID: \"f530d6cb-612c-43a8-bf77-eebb26b545fa\") " pod="openshift-marketplace/certified-operators-hj84f" Jan 31 04:13:54 crc kubenswrapper[4667]: I0131 04:13:54.354762 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hj84f"] Jan 31 04:13:54 crc kubenswrapper[4667]: I0131 04:13:54.381338 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f530d6cb-612c-43a8-bf77-eebb26b545fa-utilities\") pod \"certified-operators-hj84f\" (UID: \"f530d6cb-612c-43a8-bf77-eebb26b545fa\") " pod="openshift-marketplace/certified-operators-hj84f" Jan 31 04:13:54 crc kubenswrapper[4667]: I0131 04:13:54.381410 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srhl9\" (UniqueName: \"kubernetes.io/projected/f530d6cb-612c-43a8-bf77-eebb26b545fa-kube-api-access-srhl9\") pod \"certified-operators-hj84f\" (UID: \"f530d6cb-612c-43a8-bf77-eebb26b545fa\") " pod="openshift-marketplace/certified-operators-hj84f" Jan 31 04:13:54 crc kubenswrapper[4667]: I0131 04:13:54.381440 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f530d6cb-612c-43a8-bf77-eebb26b545fa-catalog-content\") pod \"certified-operators-hj84f\" (UID: \"f530d6cb-612c-43a8-bf77-eebb26b545fa\") " pod="openshift-marketplace/certified-operators-hj84f" Jan 31 04:13:54 crc kubenswrapper[4667]: I0131 04:13:54.382256 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f530d6cb-612c-43a8-bf77-eebb26b545fa-utilities\") pod \"certified-operators-hj84f\" (UID: \"f530d6cb-612c-43a8-bf77-eebb26b545fa\") " pod="openshift-marketplace/certified-operators-hj84f" Jan 31 04:13:54 crc kubenswrapper[4667]: I0131 04:13:54.382380 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f530d6cb-612c-43a8-bf77-eebb26b545fa-catalog-content\") pod \"certified-operators-hj84f\" (UID: \"f530d6cb-612c-43a8-bf77-eebb26b545fa\") " pod="openshift-marketplace/certified-operators-hj84f" Jan 31 04:13:54 crc kubenswrapper[4667]: I0131 04:13:54.404234 4667 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-srhl9\" (UniqueName: \"kubernetes.io/projected/f530d6cb-612c-43a8-bf77-eebb26b545fa-kube-api-access-srhl9\") pod \"certified-operators-hj84f\" (UID: \"f530d6cb-612c-43a8-bf77-eebb26b545fa\") " pod="openshift-marketplace/certified-operators-hj84f" Jan 31 04:13:54 crc kubenswrapper[4667]: I0131 04:13:54.587369 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hj84f" Jan 31 04:13:55 crc kubenswrapper[4667]: I0131 04:13:55.257318 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hj84f"] Jan 31 04:13:55 crc kubenswrapper[4667]: I0131 04:13:55.697099 4667 generic.go:334] "Generic (PLEG): container finished" podID="f530d6cb-612c-43a8-bf77-eebb26b545fa" containerID="1132e1ecf5c2f3cfdcf368da51bf6aa912f1ba50717a0eb26216f93a17b49f67" exitCode=0 Jan 31 04:13:55 crc kubenswrapper[4667]: I0131 04:13:55.697289 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj84f" event={"ID":"f530d6cb-612c-43a8-bf77-eebb26b545fa","Type":"ContainerDied","Data":"1132e1ecf5c2f3cfdcf368da51bf6aa912f1ba50717a0eb26216f93a17b49f67"} Jan 31 04:13:55 crc kubenswrapper[4667]: I0131 04:13:55.697925 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj84f" event={"ID":"f530d6cb-612c-43a8-bf77-eebb26b545fa","Type":"ContainerStarted","Data":"04f4b2afa0a05e3fb011b80b9bde45f0317a755895ac8af5736784cf4725940d"} Jan 31 04:13:57 crc kubenswrapper[4667]: I0131 04:13:57.834352 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj84f" event={"ID":"f530d6cb-612c-43a8-bf77-eebb26b545fa","Type":"ContainerStarted","Data":"4feb71adbd4e36652722318f7da6a030c9e7a9769f25df9cfb98c80d9d5fbcfa"} Jan 31 04:13:59 crc kubenswrapper[4667]: I0131 04:13:59.870725 4667 generic.go:334] "Generic (PLEG): container finished" podID="f530d6cb-612c-43a8-bf77-eebb26b545fa" containerID="4feb71adbd4e36652722318f7da6a030c9e7a9769f25df9cfb98c80d9d5fbcfa" exitCode=0 Jan 31 04:13:59 crc kubenswrapper[4667]: I0131 04:13:59.870871 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj84f" event={"ID":"f530d6cb-612c-43a8-bf77-eebb26b545fa","Type":"ContainerDied","Data":"4feb71adbd4e36652722318f7da6a030c9e7a9769f25df9cfb98c80d9d5fbcfa"} Jan 31 04:14:00 crc kubenswrapper[4667]: I0131 04:14:00.897343 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj84f" event={"ID":"f530d6cb-612c-43a8-bf77-eebb26b545fa","Type":"ContainerStarted","Data":"a51f6a5a76c9458e9c24cdd7a5d94da2fe7338b8b948a02e0f7e70d5286435ae"} Jan 31 04:14:00 crc kubenswrapper[4667]: I0131 04:14:00.930618 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hj84f" podStartSLOduration=2.052467444 podStartE2EDuration="6.930563927s" podCreationTimestamp="2026-01-31 04:13:54 +0000 UTC" firstStartedPulling="2026-01-31 04:13:55.699864358 +0000 UTC m=+1559.216199657" lastFinishedPulling="2026-01-31 04:14:00.577960841 +0000 UTC m=+1564.094296140" observedRunningTime="2026-01-31 04:14:00.919883534 +0000 UTC m=+1564.436218913" watchObservedRunningTime="2026-01-31 04:14:00.930563927 +0000 UTC m=+1564.446899266" Jan 31 04:14:04 crc kubenswrapper[4667]: I0131 04:14:04.588054 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-hj84f" Jan 31 04:14:04 crc kubenswrapper[4667]: I0131 04:14:04.589137 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hj84f" Jan 31 04:14:04 crc kubenswrapper[4667]: I0131 04:14:04.650444 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hj84f" Jan 31 04:14:14 crc kubenswrapper[4667]: I0131 04:14:14.675939 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hj84f" Jan 31 04:14:14 crc kubenswrapper[4667]: I0131 04:14:14.750542 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hj84f"] Jan 31 04:14:15 crc kubenswrapper[4667]: I0131 04:14:15.061898 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hj84f" podUID="f530d6cb-612c-43a8-bf77-eebb26b545fa" containerName="registry-server" containerID="cri-o://a51f6a5a76c9458e9c24cdd7a5d94da2fe7338b8b948a02e0f7e70d5286435ae" gracePeriod=2 Jan 31 04:14:15 crc kubenswrapper[4667]: I0131 04:14:15.679674 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hj84f" Jan 31 04:14:15 crc kubenswrapper[4667]: I0131 04:14:15.797887 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f530d6cb-612c-43a8-bf77-eebb26b545fa-utilities\") pod \"f530d6cb-612c-43a8-bf77-eebb26b545fa\" (UID: \"f530d6cb-612c-43a8-bf77-eebb26b545fa\") " Jan 31 04:14:15 crc kubenswrapper[4667]: I0131 04:14:15.798026 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srhl9\" (UniqueName: \"kubernetes.io/projected/f530d6cb-612c-43a8-bf77-eebb26b545fa-kube-api-access-srhl9\") pod \"f530d6cb-612c-43a8-bf77-eebb26b545fa\" (UID: \"f530d6cb-612c-43a8-bf77-eebb26b545fa\") " Jan 31 04:14:15 crc kubenswrapper[4667]: I0131 04:14:15.798127 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f530d6cb-612c-43a8-bf77-eebb26b545fa-catalog-content\") pod \"f530d6cb-612c-43a8-bf77-eebb26b545fa\" (UID: \"f530d6cb-612c-43a8-bf77-eebb26b545fa\") " Jan 31 04:14:15 crc kubenswrapper[4667]: I0131 04:14:15.798732 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f530d6cb-612c-43a8-bf77-eebb26b545fa-utilities" (OuterVolumeSpecName: "utilities") pod "f530d6cb-612c-43a8-bf77-eebb26b545fa" (UID: "f530d6cb-612c-43a8-bf77-eebb26b545fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:14:15 crc kubenswrapper[4667]: I0131 04:14:15.798882 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f530d6cb-612c-43a8-bf77-eebb26b545fa-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:14:15 crc kubenswrapper[4667]: I0131 04:14:15.809965 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f530d6cb-612c-43a8-bf77-eebb26b545fa-kube-api-access-srhl9" (OuterVolumeSpecName: "kube-api-access-srhl9") pod "f530d6cb-612c-43a8-bf77-eebb26b545fa" (UID: "f530d6cb-612c-43a8-bf77-eebb26b545fa"). InnerVolumeSpecName "kube-api-access-srhl9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:14:15 crc kubenswrapper[4667]: I0131 04:14:15.861365 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f530d6cb-612c-43a8-bf77-eebb26b545fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f530d6cb-612c-43a8-bf77-eebb26b545fa" (UID: "f530d6cb-612c-43a8-bf77-eebb26b545fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:14:15 crc kubenswrapper[4667]: I0131 04:14:15.901940 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srhl9\" (UniqueName: \"kubernetes.io/projected/f530d6cb-612c-43a8-bf77-eebb26b545fa-kube-api-access-srhl9\") on node \"crc\" DevicePath \"\"" Jan 31 04:14:15 crc kubenswrapper[4667]: I0131 04:14:15.902311 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f530d6cb-612c-43a8-bf77-eebb26b545fa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:14:16 crc kubenswrapper[4667]: I0131 04:14:16.079172 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hj84f" Jan 31 04:14:16 crc kubenswrapper[4667]: I0131 04:14:16.079176 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj84f" event={"ID":"f530d6cb-612c-43a8-bf77-eebb26b545fa","Type":"ContainerDied","Data":"a51f6a5a76c9458e9c24cdd7a5d94da2fe7338b8b948a02e0f7e70d5286435ae"} Jan 31 04:14:16 crc kubenswrapper[4667]: I0131 04:14:16.078997 4667 generic.go:334] "Generic (PLEG): container finished" podID="f530d6cb-612c-43a8-bf77-eebb26b545fa" containerID="a51f6a5a76c9458e9c24cdd7a5d94da2fe7338b8b948a02e0f7e70d5286435ae" exitCode=0 Jan 31 04:14:16 crc kubenswrapper[4667]: I0131 04:14:16.079479 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hj84f" event={"ID":"f530d6cb-612c-43a8-bf77-eebb26b545fa","Type":"ContainerDied","Data":"04f4b2afa0a05e3fb011b80b9bde45f0317a755895ac8af5736784cf4725940d"} Jan 31 04:14:16 crc kubenswrapper[4667]: I0131 04:14:16.079551 4667 scope.go:117] "RemoveContainer" containerID="a51f6a5a76c9458e9c24cdd7a5d94da2fe7338b8b948a02e0f7e70d5286435ae" Jan 31 04:14:16 crc kubenswrapper[4667]: I0131 04:14:16.120467 4667 scope.go:117] "RemoveContainer" containerID="4feb71adbd4e36652722318f7da6a030c9e7a9769f25df9cfb98c80d9d5fbcfa" Jan 31 04:14:16 crc kubenswrapper[4667]: I0131 04:14:16.162763 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hj84f"] Jan 31 04:14:16 crc kubenswrapper[4667]: I0131 04:14:16.179554 4667 scope.go:117] "RemoveContainer" containerID="1132e1ecf5c2f3cfdcf368da51bf6aa912f1ba50717a0eb26216f93a17b49f67" Jan 31 04:14:16 crc kubenswrapper[4667]: I0131 04:14:16.196162 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hj84f"] Jan 31 04:14:16 crc kubenswrapper[4667]: I0131 04:14:16.220608 4667 scope.go:117] "RemoveContainer" containerID="a51f6a5a76c9458e9c24cdd7a5d94da2fe7338b8b948a02e0f7e70d5286435ae" Jan 31 04:14:16 crc kubenswrapper[4667]: E0131 04:14:16.221371 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a51f6a5a76c9458e9c24cdd7a5d94da2fe7338b8b948a02e0f7e70d5286435ae\": container with ID starting with 
a51f6a5a76c9458e9c24cdd7a5d94da2fe7338b8b948a02e0f7e70d5286435ae not found: ID does not exist" containerID="a51f6a5a76c9458e9c24cdd7a5d94da2fe7338b8b948a02e0f7e70d5286435ae" Jan 31 04:14:16 crc kubenswrapper[4667]: I0131 04:14:16.221408 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a51f6a5a76c9458e9c24cdd7a5d94da2fe7338b8b948a02e0f7e70d5286435ae"} err="failed to get container status \"a51f6a5a76c9458e9c24cdd7a5d94da2fe7338b8b948a02e0f7e70d5286435ae\": rpc error: code = NotFound desc = could not find container \"a51f6a5a76c9458e9c24cdd7a5d94da2fe7338b8b948a02e0f7e70d5286435ae\": container with ID starting with a51f6a5a76c9458e9c24cdd7a5d94da2fe7338b8b948a02e0f7e70d5286435ae not found: ID does not exist" Jan 31 04:14:16 crc kubenswrapper[4667]: I0131 04:14:16.221433 4667 scope.go:117] "RemoveContainer" containerID="4feb71adbd4e36652722318f7da6a030c9e7a9769f25df9cfb98c80d9d5fbcfa" Jan 31 04:14:16 crc kubenswrapper[4667]: E0131 04:14:16.221889 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4feb71adbd4e36652722318f7da6a030c9e7a9769f25df9cfb98c80d9d5fbcfa\": container with ID starting with 4feb71adbd4e36652722318f7da6a030c9e7a9769f25df9cfb98c80d9d5fbcfa not found: ID does not exist" containerID="4feb71adbd4e36652722318f7da6a030c9e7a9769f25df9cfb98c80d9d5fbcfa" Jan 31 04:14:16 crc kubenswrapper[4667]: I0131 04:14:16.221961 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4feb71adbd4e36652722318f7da6a030c9e7a9769f25df9cfb98c80d9d5fbcfa"} err="failed to get container status \"4feb71adbd4e36652722318f7da6a030c9e7a9769f25df9cfb98c80d9d5fbcfa\": rpc error: code = NotFound desc = could not find container \"4feb71adbd4e36652722318f7da6a030c9e7a9769f25df9cfb98c80d9d5fbcfa\": container with ID starting with 4feb71adbd4e36652722318f7da6a030c9e7a9769f25df9cfb98c80d9d5fbcfa not found: ID does not exist" Jan 31 04:14:16 crc kubenswrapper[4667]: I0131 04:14:16.222043 4667 scope.go:117] "RemoveContainer" containerID="1132e1ecf5c2f3cfdcf368da51bf6aa912f1ba50717a0eb26216f93a17b49f67" Jan 31 04:14:16 crc kubenswrapper[4667]: E0131 04:14:16.222673 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1132e1ecf5c2f3cfdcf368da51bf6aa912f1ba50717a0eb26216f93a17b49f67\": container with ID starting with 1132e1ecf5c2f3cfdcf368da51bf6aa912f1ba50717a0eb26216f93a17b49f67 not found: ID does not exist" containerID="1132e1ecf5c2f3cfdcf368da51bf6aa912f1ba50717a0eb26216f93a17b49f67" Jan 31 04:14:16 crc kubenswrapper[4667]: I0131 04:14:16.222713 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1132e1ecf5c2f3cfdcf368da51bf6aa912f1ba50717a0eb26216f93a17b49f67"} err="failed to get container status \"1132e1ecf5c2f3cfdcf368da51bf6aa912f1ba50717a0eb26216f93a17b49f67\": rpc error: code = NotFound desc = could not find container \"1132e1ecf5c2f3cfdcf368da51bf6aa912f1ba50717a0eb26216f93a17b49f67\": container with ID starting with 1132e1ecf5c2f3cfdcf368da51bf6aa912f1ba50717a0eb26216f93a17b49f67 not found: ID does not exist" Jan 31 04:14:17 crc kubenswrapper[4667]: I0131 04:14:17.302247 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f530d6cb-612c-43a8-bf77-eebb26b545fa" path="/var/lib/kubelet/pods/f530d6cb-612c-43a8-bf77-eebb26b545fa/volumes" Jan 31 04:14:20 crc kubenswrapper[4667]: I0131 04:14:20.199545 
4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nz57c"] Jan 31 04:14:20 crc kubenswrapper[4667]: E0131 04:14:20.201015 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f530d6cb-612c-43a8-bf77-eebb26b545fa" containerName="extract-utilities" Jan 31 04:14:20 crc kubenswrapper[4667]: I0131 04:14:20.201042 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f530d6cb-612c-43a8-bf77-eebb26b545fa" containerName="extract-utilities" Jan 31 04:14:20 crc kubenswrapper[4667]: E0131 04:14:20.201068 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f530d6cb-612c-43a8-bf77-eebb26b545fa" containerName="registry-server" Jan 31 04:14:20 crc kubenswrapper[4667]: I0131 04:14:20.201083 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f530d6cb-612c-43a8-bf77-eebb26b545fa" containerName="registry-server" Jan 31 04:14:20 crc kubenswrapper[4667]: E0131 04:14:20.201115 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f530d6cb-612c-43a8-bf77-eebb26b545fa" containerName="extract-content" Jan 31 04:14:20 crc kubenswrapper[4667]: I0131 04:14:20.201127 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f530d6cb-612c-43a8-bf77-eebb26b545fa" containerName="extract-content" Jan 31 04:14:20 crc kubenswrapper[4667]: I0131 04:14:20.201523 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="f530d6cb-612c-43a8-bf77-eebb26b545fa" containerName="registry-server" Jan 31 04:14:20 crc kubenswrapper[4667]: I0131 04:14:20.203729 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nz57c" Jan 31 04:14:20 crc kubenswrapper[4667]: I0131 04:14:20.238648 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nz57c"] Jan 31 04:14:20 crc kubenswrapper[4667]: I0131 04:14:20.306688 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e7822fc-744a-4507-b357-7d46bc595bce-utilities\") pod \"redhat-marketplace-nz57c\" (UID: \"2e7822fc-744a-4507-b357-7d46bc595bce\") " pod="openshift-marketplace/redhat-marketplace-nz57c" Jan 31 04:14:20 crc kubenswrapper[4667]: I0131 04:14:20.306792 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbhmf\" (UniqueName: \"kubernetes.io/projected/2e7822fc-744a-4507-b357-7d46bc595bce-kube-api-access-sbhmf\") pod \"redhat-marketplace-nz57c\" (UID: \"2e7822fc-744a-4507-b357-7d46bc595bce\") " pod="openshift-marketplace/redhat-marketplace-nz57c" Jan 31 04:14:20 crc kubenswrapper[4667]: I0131 04:14:20.306985 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e7822fc-744a-4507-b357-7d46bc595bce-catalog-content\") pod \"redhat-marketplace-nz57c\" (UID: \"2e7822fc-744a-4507-b357-7d46bc595bce\") " pod="openshift-marketplace/redhat-marketplace-nz57c" Jan 31 04:14:20 crc kubenswrapper[4667]: I0131 04:14:20.409678 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e7822fc-744a-4507-b357-7d46bc595bce-utilities\") pod \"redhat-marketplace-nz57c\" (UID: \"2e7822fc-744a-4507-b357-7d46bc595bce\") " pod="openshift-marketplace/redhat-marketplace-nz57c" Jan 31 04:14:20 crc kubenswrapper[4667]: I0131 
04:14:20.409762 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbhmf\" (UniqueName: \"kubernetes.io/projected/2e7822fc-744a-4507-b357-7d46bc595bce-kube-api-access-sbhmf\") pod \"redhat-marketplace-nz57c\" (UID: \"2e7822fc-744a-4507-b357-7d46bc595bce\") " pod="openshift-marketplace/redhat-marketplace-nz57c" Jan 31 04:14:20 crc kubenswrapper[4667]: I0131 04:14:20.409910 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e7822fc-744a-4507-b357-7d46bc595bce-catalog-content\") pod \"redhat-marketplace-nz57c\" (UID: \"2e7822fc-744a-4507-b357-7d46bc595bce\") " pod="openshift-marketplace/redhat-marketplace-nz57c" Jan 31 04:14:20 crc kubenswrapper[4667]: I0131 04:14:20.410367 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e7822fc-744a-4507-b357-7d46bc595bce-utilities\") pod \"redhat-marketplace-nz57c\" (UID: \"2e7822fc-744a-4507-b357-7d46bc595bce\") " pod="openshift-marketplace/redhat-marketplace-nz57c" Jan 31 04:14:20 crc kubenswrapper[4667]: I0131 04:14:20.410465 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e7822fc-744a-4507-b357-7d46bc595bce-catalog-content\") pod \"redhat-marketplace-nz57c\" (UID: \"2e7822fc-744a-4507-b357-7d46bc595bce\") " pod="openshift-marketplace/redhat-marketplace-nz57c" Jan 31 04:14:20 crc kubenswrapper[4667]: I0131 04:14:20.434659 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbhmf\" (UniqueName: \"kubernetes.io/projected/2e7822fc-744a-4507-b357-7d46bc595bce-kube-api-access-sbhmf\") pod \"redhat-marketplace-nz57c\" (UID: \"2e7822fc-744a-4507-b357-7d46bc595bce\") " pod="openshift-marketplace/redhat-marketplace-nz57c" Jan 31 04:14:20 crc kubenswrapper[4667]: I0131 04:14:20.536801 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nz57c" Jan 31 04:14:21 crc kubenswrapper[4667]: I0131 04:14:21.063929 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nz57c"] Jan 31 04:14:21 crc kubenswrapper[4667]: I0131 04:14:21.181397 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz57c" event={"ID":"2e7822fc-744a-4507-b357-7d46bc595bce","Type":"ContainerStarted","Data":"695b621caa4b8ba80eaff1febcbc0a36f3f07f55b8d1e836d399fde8640a7508"} Jan 31 04:14:22 crc kubenswrapper[4667]: I0131 04:14:22.192909 4667 generic.go:334] "Generic (PLEG): container finished" podID="2e7822fc-744a-4507-b357-7d46bc595bce" containerID="65eecadfd81755185f37f3409e2caa7705cc0f431cf3a6a2c8d7393bb2f59317" exitCode=0 Jan 31 04:14:22 crc kubenswrapper[4667]: I0131 04:14:22.193029 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz57c" event={"ID":"2e7822fc-744a-4507-b357-7d46bc595bce","Type":"ContainerDied","Data":"65eecadfd81755185f37f3409e2caa7705cc0f431cf3a6a2c8d7393bb2f59317"} Jan 31 04:14:22 crc kubenswrapper[4667]: I0131 04:14:22.926464 4667 scope.go:117] "RemoveContainer" containerID="04031cafab9c8ee2081c1d44fa5555b5c1ede2c62553aa59a7e3863f5e2cb39e" Jan 31 04:14:22 crc kubenswrapper[4667]: I0131 04:14:22.996994 4667 scope.go:117] "RemoveContainer" containerID="d945eb277af255971ce21f9fbe29ebc3a76c0875c97b71da5a95149ba1c61844" Jan 31 04:14:23 crc kubenswrapper[4667]: I0131 04:14:23.022291 4667 scope.go:117] "RemoveContainer" containerID="4744d93f757062e772e2bad13de89f14714f329c6557f80834045603808a0be2" Jan 31 04:14:23 crc kubenswrapper[4667]: I0131 04:14:23.204446 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz57c" event={"ID":"2e7822fc-744a-4507-b357-7d46bc595bce","Type":"ContainerStarted","Data":"6e06a16de4408700b7fd3b590fdbb2c743f4c853919c203f58320b47c1781bf5"} Jan 31 04:14:24 crc kubenswrapper[4667]: I0131 04:14:24.224326 4667 generic.go:334] "Generic (PLEG): container finished" podID="2e7822fc-744a-4507-b357-7d46bc595bce" containerID="6e06a16de4408700b7fd3b590fdbb2c743f4c853919c203f58320b47c1781bf5" exitCode=0 Jan 31 04:14:24 crc kubenswrapper[4667]: I0131 04:14:24.224425 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz57c" event={"ID":"2e7822fc-744a-4507-b357-7d46bc595bce","Type":"ContainerDied","Data":"6e06a16de4408700b7fd3b590fdbb2c743f4c853919c203f58320b47c1781bf5"} Jan 31 04:14:26 crc kubenswrapper[4667]: I0131 04:14:26.276295 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz57c" event={"ID":"2e7822fc-744a-4507-b357-7d46bc595bce","Type":"ContainerStarted","Data":"eea542dc5930e674436c9f50c8d33ae06a9ee4b8d88c7474336811f0b6cfcc03"} Jan 31 04:14:30 crc kubenswrapper[4667]: I0131 04:14:30.537312 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nz57c" Jan 31 04:14:30 crc kubenswrapper[4667]: I0131 04:14:30.538467 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nz57c" Jan 31 04:14:30 crc kubenswrapper[4667]: I0131 04:14:30.610186 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nz57c" Jan 31 04:14:30 crc kubenswrapper[4667]: I0131 04:14:30.635565 4667 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nz57c" podStartSLOduration=7.700369193 podStartE2EDuration="10.63554271s" podCreationTimestamp="2026-01-31 04:14:20 +0000 UTC" firstStartedPulling="2026-01-31 04:14:22.195268208 +0000 UTC m=+1585.711603517" lastFinishedPulling="2026-01-31 04:14:25.130441705 +0000 UTC m=+1588.646777034" observedRunningTime="2026-01-31 04:14:26.309422192 +0000 UTC m=+1589.825757481" watchObservedRunningTime="2026-01-31 04:14:30.63554271 +0000 UTC m=+1594.151877999" Jan 31 04:14:31 crc kubenswrapper[4667]: I0131 04:14:31.415727 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nz57c" Jan 31 04:14:31 crc kubenswrapper[4667]: I0131 04:14:31.488776 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nz57c"] Jan 31 04:14:33 crc kubenswrapper[4667]: I0131 04:14:33.366206 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nz57c" podUID="2e7822fc-744a-4507-b357-7d46bc595bce" containerName="registry-server" containerID="cri-o://eea542dc5930e674436c9f50c8d33ae06a9ee4b8d88c7474336811f0b6cfcc03" gracePeriod=2 Jan 31 04:14:33 crc kubenswrapper[4667]: I0131 04:14:33.971166 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nz57c" Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.080938 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e7822fc-744a-4507-b357-7d46bc595bce-catalog-content\") pod \"2e7822fc-744a-4507-b357-7d46bc595bce\" (UID: \"2e7822fc-744a-4507-b357-7d46bc595bce\") " Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.081060 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbhmf\" (UniqueName: \"kubernetes.io/projected/2e7822fc-744a-4507-b357-7d46bc595bce-kube-api-access-sbhmf\") pod \"2e7822fc-744a-4507-b357-7d46bc595bce\" (UID: \"2e7822fc-744a-4507-b357-7d46bc595bce\") " Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.081274 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e7822fc-744a-4507-b357-7d46bc595bce-utilities\") pod \"2e7822fc-744a-4507-b357-7d46bc595bce\" (UID: \"2e7822fc-744a-4507-b357-7d46bc595bce\") " Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.082225 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e7822fc-744a-4507-b357-7d46bc595bce-utilities" (OuterVolumeSpecName: "utilities") pod "2e7822fc-744a-4507-b357-7d46bc595bce" (UID: "2e7822fc-744a-4507-b357-7d46bc595bce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.090313 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e7822fc-744a-4507-b357-7d46bc595bce-kube-api-access-sbhmf" (OuterVolumeSpecName: "kube-api-access-sbhmf") pod "2e7822fc-744a-4507-b357-7d46bc595bce" (UID: "2e7822fc-744a-4507-b357-7d46bc595bce"). InnerVolumeSpecName "kube-api-access-sbhmf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.111660 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e7822fc-744a-4507-b357-7d46bc595bce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e7822fc-744a-4507-b357-7d46bc595bce" (UID: "2e7822fc-744a-4507-b357-7d46bc595bce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.184585 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e7822fc-744a-4507-b357-7d46bc595bce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.185253 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbhmf\" (UniqueName: \"kubernetes.io/projected/2e7822fc-744a-4507-b357-7d46bc595bce-kube-api-access-sbhmf\") on node \"crc\" DevicePath \"\"" Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.185324 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e7822fc-744a-4507-b357-7d46bc595bce-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.379318 4667 generic.go:334] "Generic (PLEG): container finished" podID="2e7822fc-744a-4507-b357-7d46bc595bce" containerID="eea542dc5930e674436c9f50c8d33ae06a9ee4b8d88c7474336811f0b6cfcc03" exitCode=0 Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.379387 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nz57c" Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.379388 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz57c" event={"ID":"2e7822fc-744a-4507-b357-7d46bc595bce","Type":"ContainerDied","Data":"eea542dc5930e674436c9f50c8d33ae06a9ee4b8d88c7474336811f0b6cfcc03"} Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.379475 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nz57c" event={"ID":"2e7822fc-744a-4507-b357-7d46bc595bce","Type":"ContainerDied","Data":"695b621caa4b8ba80eaff1febcbc0a36f3f07f55b8d1e836d399fde8640a7508"} Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.379510 4667 scope.go:117] "RemoveContainer" containerID="eea542dc5930e674436c9f50c8d33ae06a9ee4b8d88c7474336811f0b6cfcc03" Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.405246 4667 scope.go:117] "RemoveContainer" containerID="6e06a16de4408700b7fd3b590fdbb2c743f4c853919c203f58320b47c1781bf5" Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.436498 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nz57c"] Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.450917 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nz57c"] Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.458216 4667 scope.go:117] "RemoveContainer" containerID="65eecadfd81755185f37f3409e2caa7705cc0f431cf3a6a2c8d7393bb2f59317" Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.512055 4667 scope.go:117] "RemoveContainer" containerID="eea542dc5930e674436c9f50c8d33ae06a9ee4b8d88c7474336811f0b6cfcc03" Jan 31 04:14:34 crc kubenswrapper[4667]: E0131 04:14:34.512760 4667 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eea542dc5930e674436c9f50c8d33ae06a9ee4b8d88c7474336811f0b6cfcc03\": container with ID starting with eea542dc5930e674436c9f50c8d33ae06a9ee4b8d88c7474336811f0b6cfcc03 not found: ID does not exist" containerID="eea542dc5930e674436c9f50c8d33ae06a9ee4b8d88c7474336811f0b6cfcc03" Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.512830 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eea542dc5930e674436c9f50c8d33ae06a9ee4b8d88c7474336811f0b6cfcc03"} err="failed to get container status \"eea542dc5930e674436c9f50c8d33ae06a9ee4b8d88c7474336811f0b6cfcc03\": rpc error: code = NotFound desc = could not find container \"eea542dc5930e674436c9f50c8d33ae06a9ee4b8d88c7474336811f0b6cfcc03\": container with ID starting with eea542dc5930e674436c9f50c8d33ae06a9ee4b8d88c7474336811f0b6cfcc03 not found: ID does not exist" Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.512906 4667 scope.go:117] "RemoveContainer" containerID="6e06a16de4408700b7fd3b590fdbb2c743f4c853919c203f58320b47c1781bf5" Jan 31 04:14:34 crc kubenswrapper[4667]: E0131 04:14:34.513545 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e06a16de4408700b7fd3b590fdbb2c743f4c853919c203f58320b47c1781bf5\": container with ID starting with 6e06a16de4408700b7fd3b590fdbb2c743f4c853919c203f58320b47c1781bf5 not found: ID does not exist" containerID="6e06a16de4408700b7fd3b590fdbb2c743f4c853919c203f58320b47c1781bf5" Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.513592 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e06a16de4408700b7fd3b590fdbb2c743f4c853919c203f58320b47c1781bf5"} err="failed to get container status \"6e06a16de4408700b7fd3b590fdbb2c743f4c853919c203f58320b47c1781bf5\": rpc error: code = NotFound desc = could not find container \"6e06a16de4408700b7fd3b590fdbb2c743f4c853919c203f58320b47c1781bf5\": container with ID starting with 6e06a16de4408700b7fd3b590fdbb2c743f4c853919c203f58320b47c1781bf5 not found: ID does not exist" Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.513617 4667 scope.go:117] "RemoveContainer" containerID="65eecadfd81755185f37f3409e2caa7705cc0f431cf3a6a2c8d7393bb2f59317" Jan 31 04:14:34 crc kubenswrapper[4667]: E0131 04:14:34.513996 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65eecadfd81755185f37f3409e2caa7705cc0f431cf3a6a2c8d7393bb2f59317\": container with ID starting with 65eecadfd81755185f37f3409e2caa7705cc0f431cf3a6a2c8d7393bb2f59317 not found: ID does not exist" containerID="65eecadfd81755185f37f3409e2caa7705cc0f431cf3a6a2c8d7393bb2f59317" Jan 31 04:14:34 crc kubenswrapper[4667]: I0131 04:14:34.514031 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65eecadfd81755185f37f3409e2caa7705cc0f431cf3a6a2c8d7393bb2f59317"} err="failed to get container status \"65eecadfd81755185f37f3409e2caa7705cc0f431cf3a6a2c8d7393bb2f59317\": rpc error: code = NotFound desc = could not find container \"65eecadfd81755185f37f3409e2caa7705cc0f431cf3a6a2c8d7393bb2f59317\": container with ID starting with 65eecadfd81755185f37f3409e2caa7705cc0f431cf3a6a2c8d7393bb2f59317 not found: ID does not exist" Jan 31 04:14:35 crc kubenswrapper[4667]: I0131 04:14:35.297734 4667 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="2e7822fc-744a-4507-b357-7d46bc595bce" path="/var/lib/kubelet/pods/2e7822fc-744a-4507-b357-7d46bc595bce/volumes" Jan 31 04:15:00 crc kubenswrapper[4667]: I0131 04:15:00.163491 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z"] Jan 31 04:15:00 crc kubenswrapper[4667]: E0131 04:15:00.164773 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e7822fc-744a-4507-b357-7d46bc595bce" containerName="registry-server" Jan 31 04:15:00 crc kubenswrapper[4667]: I0131 04:15:00.164816 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e7822fc-744a-4507-b357-7d46bc595bce" containerName="registry-server" Jan 31 04:15:00 crc kubenswrapper[4667]: E0131 04:15:00.164836 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e7822fc-744a-4507-b357-7d46bc595bce" containerName="extract-utilities" Jan 31 04:15:00 crc kubenswrapper[4667]: I0131 04:15:00.164863 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e7822fc-744a-4507-b357-7d46bc595bce" containerName="extract-utilities" Jan 31 04:15:00 crc kubenswrapper[4667]: E0131 04:15:00.164888 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e7822fc-744a-4507-b357-7d46bc595bce" containerName="extract-content" Jan 31 04:15:00 crc kubenswrapper[4667]: I0131 04:15:00.164896 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e7822fc-744a-4507-b357-7d46bc595bce" containerName="extract-content" Jan 31 04:15:00 crc kubenswrapper[4667]: I0131 04:15:00.165136 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e7822fc-744a-4507-b357-7d46bc595bce" containerName="registry-server" Jan 31 04:15:00 crc kubenswrapper[4667]: I0131 04:15:00.166087 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z" Jan 31 04:15:00 crc kubenswrapper[4667]: I0131 04:15:00.171805 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 04:15:00 crc kubenswrapper[4667]: I0131 04:15:00.172915 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 04:15:00 crc kubenswrapper[4667]: I0131 04:15:00.178483 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z"] Jan 31 04:15:00 crc kubenswrapper[4667]: I0131 04:15:00.259947 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd5c864b-24e1-4c2d-86bf-a3b030fc98ab-secret-volume\") pod \"collect-profiles-29497215-ts69z\" (UID: \"fd5c864b-24e1-4c2d-86bf-a3b030fc98ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z" Jan 31 04:15:00 crc kubenswrapper[4667]: I0131 04:15:00.260652 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6vbk\" (UniqueName: \"kubernetes.io/projected/fd5c864b-24e1-4c2d-86bf-a3b030fc98ab-kube-api-access-w6vbk\") pod \"collect-profiles-29497215-ts69z\" (UID: \"fd5c864b-24e1-4c2d-86bf-a3b030fc98ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z" Jan 31 04:15:00 crc kubenswrapper[4667]: I0131 04:15:00.260896 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd5c864b-24e1-4c2d-86bf-a3b030fc98ab-config-volume\") pod \"collect-profiles-29497215-ts69z\" (UID: \"fd5c864b-24e1-4c2d-86bf-a3b030fc98ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z" Jan 31 04:15:00 crc kubenswrapper[4667]: I0131 04:15:00.363324 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd5c864b-24e1-4c2d-86bf-a3b030fc98ab-secret-volume\") pod \"collect-profiles-29497215-ts69z\" (UID: \"fd5c864b-24e1-4c2d-86bf-a3b030fc98ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z" Jan 31 04:15:00 crc kubenswrapper[4667]: I0131 04:15:00.363999 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6vbk\" (UniqueName: \"kubernetes.io/projected/fd5c864b-24e1-4c2d-86bf-a3b030fc98ab-kube-api-access-w6vbk\") pod \"collect-profiles-29497215-ts69z\" (UID: \"fd5c864b-24e1-4c2d-86bf-a3b030fc98ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z" Jan 31 04:15:00 crc kubenswrapper[4667]: I0131 04:15:00.364139 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd5c864b-24e1-4c2d-86bf-a3b030fc98ab-config-volume\") pod \"collect-profiles-29497215-ts69z\" (UID: \"fd5c864b-24e1-4c2d-86bf-a3b030fc98ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z" Jan 31 04:15:00 crc kubenswrapper[4667]: I0131 04:15:00.365561 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd5c864b-24e1-4c2d-86bf-a3b030fc98ab-config-volume\") pod 
\"collect-profiles-29497215-ts69z\" (UID: \"fd5c864b-24e1-4c2d-86bf-a3b030fc98ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z" Jan 31 04:15:00 crc kubenswrapper[4667]: I0131 04:15:00.379370 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd5c864b-24e1-4c2d-86bf-a3b030fc98ab-secret-volume\") pod \"collect-profiles-29497215-ts69z\" (UID: \"fd5c864b-24e1-4c2d-86bf-a3b030fc98ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z" Jan 31 04:15:00 crc kubenswrapper[4667]: I0131 04:15:00.387828 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6vbk\" (UniqueName: \"kubernetes.io/projected/fd5c864b-24e1-4c2d-86bf-a3b030fc98ab-kube-api-access-w6vbk\") pod \"collect-profiles-29497215-ts69z\" (UID: \"fd5c864b-24e1-4c2d-86bf-a3b030fc98ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z" Jan 31 04:15:00 crc kubenswrapper[4667]: I0131 04:15:00.495003 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z" Jan 31 04:15:01 crc kubenswrapper[4667]: I0131 04:15:01.005475 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z"] Jan 31 04:15:01 crc kubenswrapper[4667]: I0131 04:15:01.715515 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z" event={"ID":"fd5c864b-24e1-4c2d-86bf-a3b030fc98ab","Type":"ContainerStarted","Data":"f734554215f06de504d368806cbfa4c8abea481e547c0599ce87adc52bc8f8c0"} Jan 31 04:15:01 crc kubenswrapper[4667]: I0131 04:15:01.716205 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z" event={"ID":"fd5c864b-24e1-4c2d-86bf-a3b030fc98ab","Type":"ContainerStarted","Data":"666fec363ee7f4e6ad9fb4501718bb2c0c694f6b803e03ae94d392acbb7d5750"} Jan 31 04:15:01 crc kubenswrapper[4667]: I0131 04:15:01.735064 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z" podStartSLOduration=1.735044771 podStartE2EDuration="1.735044771s" podCreationTimestamp="2026-01-31 04:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:15:01.733084949 +0000 UTC m=+1625.249420238" watchObservedRunningTime="2026-01-31 04:15:01.735044771 +0000 UTC m=+1625.251380070" Jan 31 04:15:02 crc kubenswrapper[4667]: I0131 04:15:02.736834 4667 generic.go:334] "Generic (PLEG): container finished" podID="fd5c864b-24e1-4c2d-86bf-a3b030fc98ab" containerID="f734554215f06de504d368806cbfa4c8abea481e547c0599ce87adc52bc8f8c0" exitCode=0 Jan 31 04:15:02 crc kubenswrapper[4667]: I0131 04:15:02.737377 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z" event={"ID":"fd5c864b-24e1-4c2d-86bf-a3b030fc98ab","Type":"ContainerDied","Data":"f734554215f06de504d368806cbfa4c8abea481e547c0599ce87adc52bc8f8c0"} Jan 31 04:15:04 crc kubenswrapper[4667]: I0131 04:15:04.131308 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z" Jan 31 04:15:04 crc kubenswrapper[4667]: I0131 04:15:04.164687 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd5c864b-24e1-4c2d-86bf-a3b030fc98ab-secret-volume\") pod \"fd5c864b-24e1-4c2d-86bf-a3b030fc98ab\" (UID: \"fd5c864b-24e1-4c2d-86bf-a3b030fc98ab\") " Jan 31 04:15:04 crc kubenswrapper[4667]: I0131 04:15:04.165609 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6vbk\" (UniqueName: \"kubernetes.io/projected/fd5c864b-24e1-4c2d-86bf-a3b030fc98ab-kube-api-access-w6vbk\") pod \"fd5c864b-24e1-4c2d-86bf-a3b030fc98ab\" (UID: \"fd5c864b-24e1-4c2d-86bf-a3b030fc98ab\") " Jan 31 04:15:04 crc kubenswrapper[4667]: I0131 04:15:04.165779 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd5c864b-24e1-4c2d-86bf-a3b030fc98ab-config-volume\") pod \"fd5c864b-24e1-4c2d-86bf-a3b030fc98ab\" (UID: \"fd5c864b-24e1-4c2d-86bf-a3b030fc98ab\") " Jan 31 04:15:04 crc kubenswrapper[4667]: I0131 04:15:04.167019 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd5c864b-24e1-4c2d-86bf-a3b030fc98ab-config-volume" (OuterVolumeSpecName: "config-volume") pod "fd5c864b-24e1-4c2d-86bf-a3b030fc98ab" (UID: "fd5c864b-24e1-4c2d-86bf-a3b030fc98ab"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:15:04 crc kubenswrapper[4667]: I0131 04:15:04.215321 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd5c864b-24e1-4c2d-86bf-a3b030fc98ab-kube-api-access-w6vbk" (OuterVolumeSpecName: "kube-api-access-w6vbk") pod "fd5c864b-24e1-4c2d-86bf-a3b030fc98ab" (UID: "fd5c864b-24e1-4c2d-86bf-a3b030fc98ab"). InnerVolumeSpecName "kube-api-access-w6vbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:15:04 crc kubenswrapper[4667]: I0131 04:15:04.216787 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5c864b-24e1-4c2d-86bf-a3b030fc98ab-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fd5c864b-24e1-4c2d-86bf-a3b030fc98ab" (UID: "fd5c864b-24e1-4c2d-86bf-a3b030fc98ab"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:15:04 crc kubenswrapper[4667]: I0131 04:15:04.268771 4667 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fd5c864b-24e1-4c2d-86bf-a3b030fc98ab-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:15:04 crc kubenswrapper[4667]: I0131 04:15:04.268815 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6vbk\" (UniqueName: \"kubernetes.io/projected/fd5c864b-24e1-4c2d-86bf-a3b030fc98ab-kube-api-access-w6vbk\") on node \"crc\" DevicePath \"\"" Jan 31 04:15:04 crc kubenswrapper[4667]: I0131 04:15:04.268825 4667 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd5c864b-24e1-4c2d-86bf-a3b030fc98ab-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:15:04 crc kubenswrapper[4667]: I0131 04:15:04.768065 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z" event={"ID":"fd5c864b-24e1-4c2d-86bf-a3b030fc98ab","Type":"ContainerDied","Data":"666fec363ee7f4e6ad9fb4501718bb2c0c694f6b803e03ae94d392acbb7d5750"} Jan 31 04:15:04 crc kubenswrapper[4667]: I0131 04:15:04.768130 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="666fec363ee7f4e6ad9fb4501718bb2c0c694f6b803e03ae94d392acbb7d5750" Jan 31 04:15:04 crc kubenswrapper[4667]: I0131 04:15:04.768215 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z" Jan 31 04:15:15 crc kubenswrapper[4667]: I0131 04:15:15.704478 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:15:15 crc kubenswrapper[4667]: I0131 04:15:15.705154 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:15:39 crc kubenswrapper[4667]: I0131 04:15:39.100765 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b82f-account-create-update-df9j2"] Jan 31 04:15:39 crc kubenswrapper[4667]: I0131 04:15:39.116754 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-cwd7k"] Jan 31 04:15:39 crc kubenswrapper[4667]: I0131 04:15:39.124851 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8891-account-create-update-b6rdx"] Jan 31 04:15:39 crc kubenswrapper[4667]: I0131 04:15:39.134779 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-85mm4"] Jan 31 04:15:39 crc kubenswrapper[4667]: I0131 04:15:39.147378 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-cwd7k"] Jan 31 04:15:39 crc kubenswrapper[4667]: I0131 04:15:39.161200 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8891-account-create-update-b6rdx"] Jan 31 04:15:39 crc kubenswrapper[4667]: I0131 04:15:39.174157 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-db-create-85mm4"] Jan 31 04:15:39 crc kubenswrapper[4667]: I0131 04:15:39.183824 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b82f-account-create-update-df9j2"] Jan 31 04:15:39 crc kubenswrapper[4667]: I0131 04:15:39.299091 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="343ee4a5-b9a5-48fe-863c-b668c87c384a" path="/var/lib/kubelet/pods/343ee4a5-b9a5-48fe-863c-b668c87c384a/volumes" Jan 31 04:15:39 crc kubenswrapper[4667]: I0131 04:15:39.301148 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a118818-9c6a-4477-8a09-84e63dd51c45" path="/var/lib/kubelet/pods/4a118818-9c6a-4477-8a09-84e63dd51c45/volumes" Jan 31 04:15:39 crc kubenswrapper[4667]: I0131 04:15:39.304372 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7744d973-bda8-482c-8c36-d3e9e7a484a4" path="/var/lib/kubelet/pods/7744d973-bda8-482c-8c36-d3e9e7a484a4/volumes" Jan 31 04:15:39 crc kubenswrapper[4667]: I0131 04:15:39.306650 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95e02c93-990d-43de-b11d-db36bc7524a6" path="/var/lib/kubelet/pods/95e02c93-990d-43de-b11d-db36bc7524a6/volumes" Jan 31 04:15:40 crc kubenswrapper[4667]: I0131 04:15:40.040817 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-95c8-account-create-update-kdk2t"] Jan 31 04:15:40 crc kubenswrapper[4667]: I0131 04:15:40.052787 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-95c8-account-create-update-kdk2t"] Jan 31 04:15:41 crc kubenswrapper[4667]: I0131 04:15:41.361649 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="342ccb2d-5b9e-433a-a8f8-9d074ee0887f" path="/var/lib/kubelet/pods/342ccb2d-5b9e-433a-a8f8-9d074ee0887f/volumes" Jan 31 04:15:45 crc kubenswrapper[4667]: I0131 04:15:45.705629 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:15:45 crc kubenswrapper[4667]: I0131 04:15:45.706341 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:15:51 crc kubenswrapper[4667]: I0131 04:15:51.065821 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-4flkf"] Jan 31 04:15:51 crc kubenswrapper[4667]: I0131 04:15:51.074907 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-4flkf"] Jan 31 04:15:51 crc kubenswrapper[4667]: I0131 04:15:51.348759 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a74e72f1-41d7-476c-abc1-00c32bfb03d8" path="/var/lib/kubelet/pods/a74e72f1-41d7-476c-abc1-00c32bfb03d8/volumes" Jan 31 04:16:15 crc kubenswrapper[4667]: I0131 04:16:15.704862 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:16:15 crc kubenswrapper[4667]: I0131 
04:16:15.708279 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:16:15 crc kubenswrapper[4667]: I0131 04:16:15.708573 4667 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 04:16:15 crc kubenswrapper[4667]: I0131 04:16:15.710196 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50"} pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:16:15 crc kubenswrapper[4667]: I0131 04:16:15.710470 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" containerID="cri-o://52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" gracePeriod=600 Jan 31 04:16:15 crc kubenswrapper[4667]: E0131 04:16:15.839884 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:16:16 crc kubenswrapper[4667]: I0131 04:16:16.635822 4667 generic.go:334] "Generic (PLEG): container finished" podID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" exitCode=0 Jan 31 04:16:16 crc kubenswrapper[4667]: I0131 04:16:16.635931 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerDied","Data":"52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50"} Jan 31 04:16:16 crc kubenswrapper[4667]: I0131 04:16:16.636064 4667 scope.go:117] "RemoveContainer" containerID="f2541fc2fda6b826061d737e4a0c482f1977e25566cf6f78f58956c4922322ef" Jan 31 04:16:16 crc kubenswrapper[4667]: I0131 04:16:16.637429 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:16:16 crc kubenswrapper[4667]: E0131 04:16:16.637972 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:16:21 crc kubenswrapper[4667]: I0131 04:16:21.054616 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dvs8p"] Jan 31 04:16:21 crc kubenswrapper[4667]: I0131 04:16:21.064245 4667 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-dvs8p"] Jan 31 04:16:21 crc kubenswrapper[4667]: I0131 04:16:21.298189 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fd7f92b-44c9-4765-99cf-9a42006f9f83" path="/var/lib/kubelet/pods/2fd7f92b-44c9-4765-99cf-9a42006f9f83/volumes" Jan 31 04:16:23 crc kubenswrapper[4667]: I0131 04:16:23.161947 4667 scope.go:117] "RemoveContainer" containerID="4907234e154b4af8f72d149a7fa846ab93ea3ae1bfe9f148e135a9abfc1476ec" Jan 31 04:16:23 crc kubenswrapper[4667]: I0131 04:16:23.208326 4667 scope.go:117] "RemoveContainer" containerID="545ca0dded484f0be0e5baf82d7730cea308935c228c45dbc72f6ebaadcea358" Jan 31 04:16:23 crc kubenswrapper[4667]: I0131 04:16:23.273656 4667 scope.go:117] "RemoveContainer" containerID="1644e21cd24370357d969e91285153a749284d12235479af8d2eabe76e8f328b" Jan 31 04:16:23 crc kubenswrapper[4667]: I0131 04:16:23.326668 4667 scope.go:117] "RemoveContainer" containerID="c6828019b6bb52b60ac5d4262021b0664d0f7e88a87ed5906e28173029b8ffcb" Jan 31 04:16:23 crc kubenswrapper[4667]: I0131 04:16:23.380671 4667 scope.go:117] "RemoveContainer" containerID="c9a697fcaa4cf6adf6c72ef0e2c7efa2f9b051de6210afd6a7295fb6aa211d05" Jan 31 04:16:23 crc kubenswrapper[4667]: I0131 04:16:23.442026 4667 scope.go:117] "RemoveContainer" containerID="84146adce4abc4bc3e847090dac8d5de1a3ebfc375b768f87f849339b6dda849" Jan 31 04:16:23 crc kubenswrapper[4667]: I0131 04:16:23.498728 4667 scope.go:117] "RemoveContainer" containerID="dac141f6b47a01ebadcfe1a91761940ab1b45d3096227a2682ccdf6430c0caf8" Jan 31 04:16:27 crc kubenswrapper[4667]: I0131 04:16:27.045217 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-dd40-account-create-update-kqrht"] Jan 31 04:16:27 crc kubenswrapper[4667]: I0131 04:16:27.055114 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-glth8"] Jan 31 04:16:27 crc kubenswrapper[4667]: I0131 04:16:27.069649 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-607a-account-create-update-ktk8n"] Jan 31 04:16:27 crc kubenswrapper[4667]: I0131 04:16:27.082970 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-dd40-account-create-update-kqrht"] Jan 31 04:16:27 crc kubenswrapper[4667]: I0131 04:16:27.095206 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-be21-account-create-update-jf2zc"] Jan 31 04:16:27 crc kubenswrapper[4667]: I0131 04:16:27.106377 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-dzwfc"] Jan 31 04:16:27 crc kubenswrapper[4667]: I0131 04:16:27.115996 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-be21-account-create-update-jf2zc"] Jan 31 04:16:27 crc kubenswrapper[4667]: I0131 04:16:27.123859 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-fmcdw"] Jan 31 04:16:27 crc kubenswrapper[4667]: I0131 04:16:27.131613 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-glth8"] Jan 31 04:16:27 crc kubenswrapper[4667]: I0131 04:16:27.139222 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-607a-account-create-update-ktk8n"] Jan 31 04:16:27 crc kubenswrapper[4667]: I0131 04:16:27.146580 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-fmcdw"] Jan 31 04:16:27 crc kubenswrapper[4667]: I0131 04:16:27.155920 4667 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/cinder-db-create-dzwfc"] Jan 31 04:16:27 crc kubenswrapper[4667]: I0131 04:16:27.300673 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="007ac74b-f070-45f8-9cf9-1807ec2563f2" path="/var/lib/kubelet/pods/007ac74b-f070-45f8-9cf9-1807ec2563f2/volumes" Jan 31 04:16:27 crc kubenswrapper[4667]: I0131 04:16:27.302698 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="075dd640-8e38-4b34-b2fb-437599bbeb08" path="/var/lib/kubelet/pods/075dd640-8e38-4b34-b2fb-437599bbeb08/volumes" Jan 31 04:16:27 crc kubenswrapper[4667]: I0131 04:16:27.303343 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="120c8a1f-7144-4b39-9040-7ffc70da2eb2" path="/var/lib/kubelet/pods/120c8a1f-7144-4b39-9040-7ffc70da2eb2/volumes" Jan 31 04:16:27 crc kubenswrapper[4667]: I0131 04:16:27.304424 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ec06763-5d93-465b-ade2-557cc5072827" path="/var/lib/kubelet/pods/1ec06763-5d93-465b-ade2-557cc5072827/volumes" Jan 31 04:16:27 crc kubenswrapper[4667]: I0131 04:16:27.305614 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2edbaebb-5022-48a4-82ab-2cb5b23fae97" path="/var/lib/kubelet/pods/2edbaebb-5022-48a4-82ab-2cb5b23fae97/volumes" Jan 31 04:16:27 crc kubenswrapper[4667]: I0131 04:16:27.307467 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36961045-4f23-401f-92a0-2fe30920bdf6" path="/var/lib/kubelet/pods/36961045-4f23-401f-92a0-2fe30920bdf6/volumes" Jan 31 04:16:30 crc kubenswrapper[4667]: I0131 04:16:30.282261 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:16:30 crc kubenswrapper[4667]: E0131 04:16:30.283126 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:16:30 crc kubenswrapper[4667]: I0131 04:16:30.850129 4667 generic.go:334] "Generic (PLEG): container finished" podID="24442823-d584-44f3-bf92-1e3382adb87f" containerID="4283f36ba209c17e7b4b790b8eff93244c1a6528122e1453a1d867948bf6086a" exitCode=0 Jan 31 04:16:30 crc kubenswrapper[4667]: I0131 04:16:30.850232 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" event={"ID":"24442823-d584-44f3-bf92-1e3382adb87f","Type":"ContainerDied","Data":"4283f36ba209c17e7b4b790b8eff93244c1a6528122e1453a1d867948bf6086a"} Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.330524 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.384979 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24442823-d584-44f3-bf92-1e3382adb87f-bootstrap-combined-ca-bundle\") pod \"24442823-d584-44f3-bf92-1e3382adb87f\" (UID: \"24442823-d584-44f3-bf92-1e3382adb87f\") " Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.385308 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj55j\" (UniqueName: \"kubernetes.io/projected/24442823-d584-44f3-bf92-1e3382adb87f-kube-api-access-vj55j\") pod \"24442823-d584-44f3-bf92-1e3382adb87f\" (UID: \"24442823-d584-44f3-bf92-1e3382adb87f\") " Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.385473 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24442823-d584-44f3-bf92-1e3382adb87f-ssh-key-openstack-edpm-ipam\") pod \"24442823-d584-44f3-bf92-1e3382adb87f\" (UID: \"24442823-d584-44f3-bf92-1e3382adb87f\") " Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.385612 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24442823-d584-44f3-bf92-1e3382adb87f-inventory\") pod \"24442823-d584-44f3-bf92-1e3382adb87f\" (UID: \"24442823-d584-44f3-bf92-1e3382adb87f\") " Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.393510 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24442823-d584-44f3-bf92-1e3382adb87f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "24442823-d584-44f3-bf92-1e3382adb87f" (UID: "24442823-d584-44f3-bf92-1e3382adb87f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.410121 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24442823-d584-44f3-bf92-1e3382adb87f-kube-api-access-vj55j" (OuterVolumeSpecName: "kube-api-access-vj55j") pod "24442823-d584-44f3-bf92-1e3382adb87f" (UID: "24442823-d584-44f3-bf92-1e3382adb87f"). InnerVolumeSpecName "kube-api-access-vj55j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.454423 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24442823-d584-44f3-bf92-1e3382adb87f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "24442823-d584-44f3-bf92-1e3382adb87f" (UID: "24442823-d584-44f3-bf92-1e3382adb87f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.455749 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24442823-d584-44f3-bf92-1e3382adb87f-inventory" (OuterVolumeSpecName: "inventory") pod "24442823-d584-44f3-bf92-1e3382adb87f" (UID: "24442823-d584-44f3-bf92-1e3382adb87f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.490301 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj55j\" (UniqueName: \"kubernetes.io/projected/24442823-d584-44f3-bf92-1e3382adb87f-kube-api-access-vj55j\") on node \"crc\" DevicePath \"\"" Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.490360 4667 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24442823-d584-44f3-bf92-1e3382adb87f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.490372 4667 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24442823-d584-44f3-bf92-1e3382adb87f-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.490387 4667 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24442823-d584-44f3-bf92-1e3382adb87f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.904219 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" event={"ID":"24442823-d584-44f3-bf92-1e3382adb87f","Type":"ContainerDied","Data":"dd7666c17c4e4d59c2e3a3590700d866002f59c3492636358633b75ed6857941"} Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.904698 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd7666c17c4e4d59c2e3a3590700d866002f59c3492636358633b75ed6857941" Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.904378 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5" Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.998282 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz"] Jan 31 04:16:32 crc kubenswrapper[4667]: E0131 04:16:32.998822 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5c864b-24e1-4c2d-86bf-a3b030fc98ab" containerName="collect-profiles" Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.998853 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5c864b-24e1-4c2d-86bf-a3b030fc98ab" containerName="collect-profiles" Jan 31 04:16:32 crc kubenswrapper[4667]: E0131 04:16:32.998869 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24442823-d584-44f3-bf92-1e3382adb87f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.998876 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="24442823-d584-44f3-bf92-1e3382adb87f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.999075 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="24442823-d584-44f3-bf92-1e3382adb87f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 31 04:16:32 crc kubenswrapper[4667]: I0131 04:16:32.999091 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5c864b-24e1-4c2d-86bf-a3b030fc98ab" containerName="collect-profiles" Jan 31 04:16:33 crc kubenswrapper[4667]: I0131 04:16:32.999937 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz" Jan 31 04:16:33 crc kubenswrapper[4667]: I0131 04:16:33.005058 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:16:33 crc kubenswrapper[4667]: I0131 04:16:33.005390 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:16:33 crc kubenswrapper[4667]: I0131 04:16:33.006112 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:16:33 crc kubenswrapper[4667]: I0131 04:16:33.006435 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7p2q" Jan 31 04:16:33 crc kubenswrapper[4667]: I0131 04:16:33.016506 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz"] Jan 31 04:16:33 crc kubenswrapper[4667]: I0131 04:16:33.105336 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrjw4\" (UniqueName: \"kubernetes.io/projected/15d1c9f5-7546-4262-ada1-71b362ddd67e-kube-api-access-wrjw4\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-r7phz\" (UID: \"15d1c9f5-7546-4262-ada1-71b362ddd67e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz" Jan 31 04:16:33 crc kubenswrapper[4667]: I0131 04:16:33.105673 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15d1c9f5-7546-4262-ada1-71b362ddd67e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-r7phz\" (UID: \"15d1c9f5-7546-4262-ada1-71b362ddd67e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz" Jan 31 04:16:33 crc kubenswrapper[4667]: I0131 04:16:33.105877 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15d1c9f5-7546-4262-ada1-71b362ddd67e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-r7phz\" (UID: \"15d1c9f5-7546-4262-ada1-71b362ddd67e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz" Jan 31 04:16:33 crc kubenswrapper[4667]: I0131 04:16:33.208076 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrjw4\" (UniqueName: \"kubernetes.io/projected/15d1c9f5-7546-4262-ada1-71b362ddd67e-kube-api-access-wrjw4\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-r7phz\" (UID: \"15d1c9f5-7546-4262-ada1-71b362ddd67e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz" Jan 31 04:16:33 crc kubenswrapper[4667]: I0131 04:16:33.208137 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15d1c9f5-7546-4262-ada1-71b362ddd67e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-r7phz\" (UID: \"15d1c9f5-7546-4262-ada1-71b362ddd67e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz" Jan 31 04:16:33 crc kubenswrapper[4667]: I0131 04:16:33.208165 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/15d1c9f5-7546-4262-ada1-71b362ddd67e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-r7phz\" (UID: \"15d1c9f5-7546-4262-ada1-71b362ddd67e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz" Jan 31 04:16:33 crc kubenswrapper[4667]: I0131 04:16:33.214602 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15d1c9f5-7546-4262-ada1-71b362ddd67e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-r7phz\" (UID: \"15d1c9f5-7546-4262-ada1-71b362ddd67e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz" Jan 31 04:16:33 crc kubenswrapper[4667]: I0131 04:16:33.214635 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15d1c9f5-7546-4262-ada1-71b362ddd67e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-r7phz\" (UID: \"15d1c9f5-7546-4262-ada1-71b362ddd67e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz" Jan 31 04:16:33 crc kubenswrapper[4667]: I0131 04:16:33.232154 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrjw4\" (UniqueName: \"kubernetes.io/projected/15d1c9f5-7546-4262-ada1-71b362ddd67e-kube-api-access-wrjw4\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-r7phz\" (UID: \"15d1c9f5-7546-4262-ada1-71b362ddd67e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz" Jan 31 04:16:33 crc kubenswrapper[4667]: I0131 04:16:33.374235 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz" Jan 31 04:16:33 crc kubenswrapper[4667]: I0131 04:16:33.981333 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz"] Jan 31 04:16:34 crc kubenswrapper[4667]: I0131 04:16:34.000232 4667 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:16:34 crc kubenswrapper[4667]: I0131 04:16:34.925334 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz" event={"ID":"15d1c9f5-7546-4262-ada1-71b362ddd67e","Type":"ContainerStarted","Data":"1e034c2744ca5fc110eb598e1c30394e5ac6b6da4ee35d92560dbb0ce1ca2281"} Jan 31 04:16:34 crc kubenswrapper[4667]: I0131 04:16:34.926091 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz" event={"ID":"15d1c9f5-7546-4262-ada1-71b362ddd67e","Type":"ContainerStarted","Data":"ddd19de564be7e0f55c30fcaa2124d155ce21abc127ee67a3bc730e3411f10ad"} Jan 31 04:16:34 crc kubenswrapper[4667]: I0131 04:16:34.946458 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz" podStartSLOduration=2.531936198 podStartE2EDuration="2.946435083s" podCreationTimestamp="2026-01-31 04:16:32 +0000 UTC" firstStartedPulling="2026-01-31 04:16:33.999807524 +0000 UTC m=+1717.516142853" lastFinishedPulling="2026-01-31 04:16:34.414306429 +0000 UTC m=+1717.930641738" observedRunningTime="2026-01-31 04:16:34.941359228 +0000 UTC m=+1718.457694547" watchObservedRunningTime="2026-01-31 04:16:34.946435083 +0000 UTC m=+1718.462770382" Jan 31 04:16:36 crc 
kubenswrapper[4667]: I0131 04:16:36.046749 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-9mmhm"] Jan 31 04:16:36 crc kubenswrapper[4667]: I0131 04:16:36.064007 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-9mmhm"] Jan 31 04:16:37 crc kubenswrapper[4667]: I0131 04:16:37.042149 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-txc7n"] Jan 31 04:16:37 crc kubenswrapper[4667]: I0131 04:16:37.052807 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-txc7n"] Jan 31 04:16:37 crc kubenswrapper[4667]: I0131 04:16:37.295784 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78fd8a04-83bd-43d3-9a36-e116ecb3951a" path="/var/lib/kubelet/pods/78fd8a04-83bd-43d3-9a36-e116ecb3951a/volumes" Jan 31 04:16:37 crc kubenswrapper[4667]: I0131 04:16:37.297511 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="959a81ea-7cf7-4fc4-b84d-14699d4e6bb4" path="/var/lib/kubelet/pods/959a81ea-7cf7-4fc4-b84d-14699d4e6bb4/volumes" Jan 31 04:16:42 crc kubenswrapper[4667]: I0131 04:16:42.282637 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:16:42 crc kubenswrapper[4667]: E0131 04:16:42.283830 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:16:57 crc kubenswrapper[4667]: I0131 04:16:57.291030 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:16:57 crc kubenswrapper[4667]: E0131 04:16:57.292674 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:17:09 crc kubenswrapper[4667]: I0131 04:17:09.282555 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:17:09 crc kubenswrapper[4667]: E0131 04:17:09.283859 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:17:19 crc kubenswrapper[4667]: I0131 04:17:19.045510 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-vvkc5"] Jan 31 04:17:19 crc kubenswrapper[4667]: I0131 04:17:19.057552 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-vvkc5"] Jan 31 04:17:19 crc kubenswrapper[4667]: I0131 04:17:19.298538 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="b58d4b49-fb58-480e-9a43-2675ce1fc0c1" path="/var/lib/kubelet/pods/b58d4b49-fb58-480e-9a43-2675ce1fc0c1/volumes" Jan 31 04:17:22 crc kubenswrapper[4667]: I0131 04:17:22.282076 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:17:22 crc kubenswrapper[4667]: E0131 04:17:22.282662 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:17:23 crc kubenswrapper[4667]: I0131 04:17:23.716776 4667 scope.go:117] "RemoveContainer" containerID="49e6db4ed43ae57cf2c262faf58a2c2951eec05b799f08020e9a3f1a594aeac7" Jan 31 04:17:23 crc kubenswrapper[4667]: I0131 04:17:23.770226 4667 scope.go:117] "RemoveContainer" containerID="58ca5e7bd87e3f545c7358c0d9d32b5178f7946fa0a397d40b132f23ec981a5e" Jan 31 04:17:23 crc kubenswrapper[4667]: I0131 04:17:23.845787 4667 scope.go:117] "RemoveContainer" containerID="0ad35232b5f12cc5e741509fe5b5e67f9edcd05adc462628411a16d0ba90f272" Jan 31 04:17:23 crc kubenswrapper[4667]: I0131 04:17:23.908557 4667 scope.go:117] "RemoveContainer" containerID="e700e19d2ca5598715b45a8e1325e3ea36ecd48739853706c79597b236a0b2a9" Jan 31 04:17:23 crc kubenswrapper[4667]: I0131 04:17:23.972894 4667 scope.go:117] "RemoveContainer" containerID="41a9455c5718466cb3f2d2f81ba67e00d4664bf2285080baa0af12463207492a" Jan 31 04:17:24 crc kubenswrapper[4667]: I0131 04:17:24.024232 4667 scope.go:117] "RemoveContainer" containerID="49a2e52ab6872fb3a86d88661813b708e00bc2f970f66d3202030bec584e4a8d" Jan 31 04:17:24 crc kubenswrapper[4667]: I0131 04:17:24.083386 4667 scope.go:117] "RemoveContainer" containerID="f0fbfb66c2cd178083036c05278c819f4f045ef896b9882c36534e16f0433fc5" Jan 31 04:17:24 crc kubenswrapper[4667]: I0131 04:17:24.117686 4667 scope.go:117] "RemoveContainer" containerID="8f526d3fc408b027b2984bb6a4129f491e7482d1311b67474d228372ac47d7e0" Jan 31 04:17:24 crc kubenswrapper[4667]: I0131 04:17:24.149524 4667 scope.go:117] "RemoveContainer" containerID="2b8a1c25c88ebf53471e593f87e8aceaf28ff19cc1ab03ea3a241d6cd5619274" Jan 31 04:17:34 crc kubenswrapper[4667]: I0131 04:17:34.051063 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-w9cj2"] Jan 31 04:17:34 crc kubenswrapper[4667]: I0131 04:17:34.062499 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-w9cj2"] Jan 31 04:17:34 crc kubenswrapper[4667]: I0131 04:17:34.071253 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-mkdm4"] Jan 31 04:17:34 crc kubenswrapper[4667]: I0131 04:17:34.079860 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-mkdm4"] Jan 31 04:17:34 crc kubenswrapper[4667]: I0131 04:17:34.281709 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:17:34 crc kubenswrapper[4667]: E0131 04:17:34.282936 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:17:35 crc kubenswrapper[4667]: I0131 04:17:35.305417 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e23be1c-6ab2-442e-b12e-e4083c274a67" path="/var/lib/kubelet/pods/6e23be1c-6ab2-442e-b12e-e4083c274a67/volumes" Jan 31 04:17:35 crc kubenswrapper[4667]: I0131 04:17:35.308160 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d7dc1b5-7662-4687-964b-b3e21fce9e06" path="/var/lib/kubelet/pods/8d7dc1b5-7662-4687-964b-b3e21fce9e06/volumes" Jan 31 04:17:46 crc kubenswrapper[4667]: I0131 04:17:46.282825 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:17:46 crc kubenswrapper[4667]: E0131 04:17:46.284399 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:17:48 crc kubenswrapper[4667]: I0131 04:17:48.074670 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-245c9"] Jan 31 04:17:48 crc kubenswrapper[4667]: I0131 04:17:48.089400 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-245c9"] Jan 31 04:17:49 crc kubenswrapper[4667]: I0131 04:17:49.295145 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc9db8ae-2f60-4efd-9a11-4aac5f336900" path="/var/lib/kubelet/pods/bc9db8ae-2f60-4efd-9a11-4aac5f336900/volumes" Jan 31 04:17:51 crc kubenswrapper[4667]: I0131 04:17:51.049566 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-4nj2p"] Jan 31 04:17:51 crc kubenswrapper[4667]: I0131 04:17:51.069223 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-4nj2p"] Jan 31 04:17:51 crc kubenswrapper[4667]: I0131 04:17:51.310097 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b6bac61-1103-438b-9e75-f3d6b6902270" path="/var/lib/kubelet/pods/7b6bac61-1103-438b-9e75-f3d6b6902270/volumes" Jan 31 04:17:59 crc kubenswrapper[4667]: I0131 04:17:59.282826 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:17:59 crc kubenswrapper[4667]: E0131 04:17:59.285689 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:18:14 crc kubenswrapper[4667]: I0131 04:18:14.282224 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:18:14 crc kubenswrapper[4667]: E0131 04:18:14.283106 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:18:18 crc kubenswrapper[4667]: I0131 04:18:18.468126 4667 generic.go:334] "Generic (PLEG): container finished" podID="15d1c9f5-7546-4262-ada1-71b362ddd67e" containerID="1e034c2744ca5fc110eb598e1c30394e5ac6b6da4ee35d92560dbb0ce1ca2281" exitCode=0 Jan 31 04:18:18 crc kubenswrapper[4667]: I0131 04:18:18.468252 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz" event={"ID":"15d1c9f5-7546-4262-ada1-71b362ddd67e","Type":"ContainerDied","Data":"1e034c2744ca5fc110eb598e1c30394e5ac6b6da4ee35d92560dbb0ce1ca2281"} Jan 31 04:18:19 crc kubenswrapper[4667]: I0131 04:18:19.921822 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.111625 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrjw4\" (UniqueName: \"kubernetes.io/projected/15d1c9f5-7546-4262-ada1-71b362ddd67e-kube-api-access-wrjw4\") pod \"15d1c9f5-7546-4262-ada1-71b362ddd67e\" (UID: \"15d1c9f5-7546-4262-ada1-71b362ddd67e\") " Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.111951 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15d1c9f5-7546-4262-ada1-71b362ddd67e-ssh-key-openstack-edpm-ipam\") pod \"15d1c9f5-7546-4262-ada1-71b362ddd67e\" (UID: \"15d1c9f5-7546-4262-ada1-71b362ddd67e\") " Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.112079 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15d1c9f5-7546-4262-ada1-71b362ddd67e-inventory\") pod \"15d1c9f5-7546-4262-ada1-71b362ddd67e\" (UID: \"15d1c9f5-7546-4262-ada1-71b362ddd67e\") " Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.120909 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15d1c9f5-7546-4262-ada1-71b362ddd67e-kube-api-access-wrjw4" (OuterVolumeSpecName: "kube-api-access-wrjw4") pod "15d1c9f5-7546-4262-ada1-71b362ddd67e" (UID: "15d1c9f5-7546-4262-ada1-71b362ddd67e"). InnerVolumeSpecName "kube-api-access-wrjw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.149571 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d1c9f5-7546-4262-ada1-71b362ddd67e-inventory" (OuterVolumeSpecName: "inventory") pod "15d1c9f5-7546-4262-ada1-71b362ddd67e" (UID: "15d1c9f5-7546-4262-ada1-71b362ddd67e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.151354 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d1c9f5-7546-4262-ada1-71b362ddd67e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "15d1c9f5-7546-4262-ada1-71b362ddd67e" (UID: "15d1c9f5-7546-4262-ada1-71b362ddd67e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.215405 4667 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/15d1c9f5-7546-4262-ada1-71b362ddd67e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.215454 4667 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15d1c9f5-7546-4262-ada1-71b362ddd67e-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.215480 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrjw4\" (UniqueName: \"kubernetes.io/projected/15d1c9f5-7546-4262-ada1-71b362ddd67e-kube-api-access-wrjw4\") on node \"crc\" DevicePath \"\"" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.497132 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz" event={"ID":"15d1c9f5-7546-4262-ada1-71b362ddd67e","Type":"ContainerDied","Data":"ddd19de564be7e0f55c30fcaa2124d155ce21abc127ee67a3bc730e3411f10ad"} Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.497214 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddd19de564be7e0f55c30fcaa2124d155ce21abc127ee67a3bc730e3411f10ad" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.497232 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-r7phz" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.630093 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr"] Jan 31 04:18:20 crc kubenswrapper[4667]: E0131 04:18:20.630554 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15d1c9f5-7546-4262-ada1-71b362ddd67e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.630573 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="15d1c9f5-7546-4262-ada1-71b362ddd67e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.630830 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="15d1c9f5-7546-4262-ada1-71b362ddd67e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.632656 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.636120 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.636549 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.638432 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7p2q" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.638663 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.657558 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr"] Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.731066 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c49961f-cfd8-428d-b32b-4e3f85e554d5-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jpglr\" (UID: \"2c49961f-cfd8-428d-b32b-4e3f85e554d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.731460 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67bbl\" (UniqueName: \"kubernetes.io/projected/2c49961f-cfd8-428d-b32b-4e3f85e554d5-kube-api-access-67bbl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jpglr\" (UID: \"2c49961f-cfd8-428d-b32b-4e3f85e554d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.731603 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c49961f-cfd8-428d-b32b-4e3f85e554d5-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jpglr\" (UID: \"2c49961f-cfd8-428d-b32b-4e3f85e554d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.835249 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c49961f-cfd8-428d-b32b-4e3f85e554d5-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jpglr\" (UID: \"2c49961f-cfd8-428d-b32b-4e3f85e554d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.835374 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67bbl\" (UniqueName: \"kubernetes.io/projected/2c49961f-cfd8-428d-b32b-4e3f85e554d5-kube-api-access-67bbl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jpglr\" (UID: \"2c49961f-cfd8-428d-b32b-4e3f85e554d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.835414 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/2c49961f-cfd8-428d-b32b-4e3f85e554d5-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jpglr\" (UID: \"2c49961f-cfd8-428d-b32b-4e3f85e554d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.842614 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c49961f-cfd8-428d-b32b-4e3f85e554d5-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jpglr\" (UID: \"2c49961f-cfd8-428d-b32b-4e3f85e554d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.842922 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c49961f-cfd8-428d-b32b-4e3f85e554d5-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jpglr\" (UID: \"2c49961f-cfd8-428d-b32b-4e3f85e554d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.862671 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67bbl\" (UniqueName: \"kubernetes.io/projected/2c49961f-cfd8-428d-b32b-4e3f85e554d5-kube-api-access-67bbl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jpglr\" (UID: \"2c49961f-cfd8-428d-b32b-4e3f85e554d5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr" Jan 31 04:18:20 crc kubenswrapper[4667]: I0131 04:18:20.971457 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr" Jan 31 04:18:21 crc kubenswrapper[4667]: I0131 04:18:21.645049 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr"] Jan 31 04:18:22 crc kubenswrapper[4667]: I0131 04:18:22.536960 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr" event={"ID":"2c49961f-cfd8-428d-b32b-4e3f85e554d5","Type":"ContainerStarted","Data":"0282bc28c2e0f163872b21114ebcb4973df5a2291ba3bfbe9848581bffde6aac"} Jan 31 04:18:22 crc kubenswrapper[4667]: I0131 04:18:22.537489 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr" event={"ID":"2c49961f-cfd8-428d-b32b-4e3f85e554d5","Type":"ContainerStarted","Data":"4747b2a59cf9da12ed235e5bd56ec151e9905ac7f8236c2e7af0f63c6235834a"} Jan 31 04:18:22 crc kubenswrapper[4667]: I0131 04:18:22.583869 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr" podStartSLOduration=2.03404354 podStartE2EDuration="2.583814582s" podCreationTimestamp="2026-01-31 04:18:20 +0000 UTC" firstStartedPulling="2026-01-31 04:18:21.643278587 +0000 UTC m=+1825.159613896" lastFinishedPulling="2026-01-31 04:18:22.193049639 +0000 UTC m=+1825.709384938" observedRunningTime="2026-01-31 04:18:22.57240078 +0000 UTC m=+1826.088736099" watchObservedRunningTime="2026-01-31 04:18:22.583814582 +0000 UTC m=+1826.100149901" Jan 31 04:18:24 crc kubenswrapper[4667]: I0131 04:18:24.382998 4667 scope.go:117] "RemoveContainer" 
containerID="4c202cc92d71eb2da32ef44d2be02ac9d3194bb7b9c71a71d07945f8521093e4" Jan 31 04:18:24 crc kubenswrapper[4667]: I0131 04:18:24.431580 4667 scope.go:117] "RemoveContainer" containerID="0b580b6dd82d2193fa73ab8fbf259431448619ef80802ef62c949f8363ee652d" Jan 31 04:18:24 crc kubenswrapper[4667]: I0131 04:18:24.483773 4667 scope.go:117] "RemoveContainer" containerID="880da3e2ef396b2fc27ef70d4c80b64e4d0e98ac62c00ed10919b5350be3803a" Jan 31 04:18:24 crc kubenswrapper[4667]: I0131 04:18:24.550675 4667 scope.go:117] "RemoveContainer" containerID="a2ba612c47c6a1009fc72ff61e30d9cf1ee4813472358fc9f7e831e6c727188b" Jan 31 04:18:26 crc kubenswrapper[4667]: I0131 04:18:26.281623 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:18:26 crc kubenswrapper[4667]: E0131 04:18:26.282510 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:18:40 crc kubenswrapper[4667]: I0131 04:18:40.283149 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:18:40 crc kubenswrapper[4667]: E0131 04:18:40.284575 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:18:51 crc kubenswrapper[4667]: I0131 04:18:51.282228 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:18:51 crc kubenswrapper[4667]: E0131 04:18:51.283897 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:19:05 crc kubenswrapper[4667]: I0131 04:19:05.284271 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:19:05 crc kubenswrapper[4667]: E0131 04:19:05.286220 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:19:19 crc kubenswrapper[4667]: I0131 04:19:19.282366 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:19:19 crc kubenswrapper[4667]: E0131 04:19:19.283546 4667 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:19:20 crc kubenswrapper[4667]: I0131 04:19:20.070965 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-x9gc7"] Jan 31 04:19:20 crc kubenswrapper[4667]: I0131 04:19:20.079419 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-8q9zm"] Jan 31 04:19:20 crc kubenswrapper[4667]: I0131 04:19:20.091569 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-8q9zm"] Jan 31 04:19:20 crc kubenswrapper[4667]: I0131 04:19:20.099797 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-5055-account-create-update-2rcb7"] Jan 31 04:19:20 crc kubenswrapper[4667]: I0131 04:19:20.112954 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-x9gc7"] Jan 31 04:19:20 crc kubenswrapper[4667]: I0131 04:19:20.121770 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-5055-account-create-update-2rcb7"] Jan 31 04:19:20 crc kubenswrapper[4667]: I0131 04:19:20.129191 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-bnv28"] Jan 31 04:19:20 crc kubenswrapper[4667]: I0131 04:19:20.135681 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-32cf-account-create-update-hsfjj"] Jan 31 04:19:20 crc kubenswrapper[4667]: I0131 04:19:20.143055 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-916e-account-create-update-ps447"] Jan 31 04:19:20 crc kubenswrapper[4667]: I0131 04:19:20.157817 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-bnv28"] Jan 31 04:19:20 crc kubenswrapper[4667]: I0131 04:19:20.166669 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-916e-account-create-update-ps447"] Jan 31 04:19:20 crc kubenswrapper[4667]: I0131 04:19:20.176223 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-32cf-account-create-update-hsfjj"] Jan 31 04:19:21 crc kubenswrapper[4667]: I0131 04:19:21.310959 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="492ab7ad-86ee-4c40-a924-bd5cd948a4dd" path="/var/lib/kubelet/pods/492ab7ad-86ee-4c40-a924-bd5cd948a4dd/volumes" Jan 31 04:19:21 crc kubenswrapper[4667]: I0131 04:19:21.312140 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b1f7066-97d5-4fbf-915e-06d9ed522000" path="/var/lib/kubelet/pods/6b1f7066-97d5-4fbf-915e-06d9ed522000/volumes" Jan 31 04:19:21 crc kubenswrapper[4667]: I0131 04:19:21.312705 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73be4b20-cf7a-430b-995f-07f3475b064c" path="/var/lib/kubelet/pods/73be4b20-cf7a-430b-995f-07f3475b064c/volumes" Jan 31 04:19:21 crc kubenswrapper[4667]: I0131 04:19:21.313323 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a405aa98-d9c4-4ee1-90bb-da0cc8e09301" path="/var/lib/kubelet/pods/a405aa98-d9c4-4ee1-90bb-da0cc8e09301/volumes" Jan 31 04:19:21 crc kubenswrapper[4667]: I0131 04:19:21.314428 4667 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="be0254c4-04b0-44bb-96dd-69a9538a9f9e" path="/var/lib/kubelet/pods/be0254c4-04b0-44bb-96dd-69a9538a9f9e/volumes" Jan 31 04:19:21 crc kubenswrapper[4667]: I0131 04:19:21.315107 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cce7e8e8-08b1-41ef-b1d4-efaba433bec0" path="/var/lib/kubelet/pods/cce7e8e8-08b1-41ef-b1d4-efaba433bec0/volumes" Jan 31 04:19:24 crc kubenswrapper[4667]: I0131 04:19:24.684288 4667 scope.go:117] "RemoveContainer" containerID="c5172dd4ff35c2f9ff9790dc52aa49679c1a8d78f8091e15f8d887b09ca20690" Jan 31 04:19:24 crc kubenswrapper[4667]: I0131 04:19:24.722246 4667 scope.go:117] "RemoveContainer" containerID="5489fab3329bbf4878f6363efb6c6b7f3da2406c445cd91f0a1da36d4af710b1" Jan 31 04:19:24 crc kubenswrapper[4667]: I0131 04:19:24.798703 4667 scope.go:117] "RemoveContainer" containerID="14d55ff3e972f1bbe1669d82936c183500a7124309b34ea687681c33dd1ea204" Jan 31 04:19:24 crc kubenswrapper[4667]: I0131 04:19:24.846799 4667 scope.go:117] "RemoveContainer" containerID="90d1156965a1f3f60aa54b48a50767575cbe330c7cebf3ed553a11ea924b4ba5" Jan 31 04:19:24 crc kubenswrapper[4667]: I0131 04:19:24.886114 4667 scope.go:117] "RemoveContainer" containerID="03331c518d65e5bd3c7066eaaa3a6c4c87cb6d8b666026c64d9556ea17d9576a" Jan 31 04:19:24 crc kubenswrapper[4667]: I0131 04:19:24.930376 4667 scope.go:117] "RemoveContainer" containerID="df2769d8d82b91d8ab5e821aac77e93796beaec66eb7bfe9a8f0a555e949ec26" Jan 31 04:19:34 crc kubenswrapper[4667]: I0131 04:19:34.282357 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:19:34 crc kubenswrapper[4667]: E0131 04:19:34.283533 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:19:34 crc kubenswrapper[4667]: I0131 04:19:34.476531 4667 generic.go:334] "Generic (PLEG): container finished" podID="2c49961f-cfd8-428d-b32b-4e3f85e554d5" containerID="0282bc28c2e0f163872b21114ebcb4973df5a2291ba3bfbe9848581bffde6aac" exitCode=0 Jan 31 04:19:34 crc kubenswrapper[4667]: I0131 04:19:34.476597 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr" event={"ID":"2c49961f-cfd8-428d-b32b-4e3f85e554d5","Type":"ContainerDied","Data":"0282bc28c2e0f163872b21114ebcb4973df5a2291ba3bfbe9848581bffde6aac"} Jan 31 04:19:35 crc kubenswrapper[4667]: I0131 04:19:35.979142 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.173695 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c49961f-cfd8-428d-b32b-4e3f85e554d5-inventory\") pod \"2c49961f-cfd8-428d-b32b-4e3f85e554d5\" (UID: \"2c49961f-cfd8-428d-b32b-4e3f85e554d5\") " Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.174103 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c49961f-cfd8-428d-b32b-4e3f85e554d5-ssh-key-openstack-edpm-ipam\") pod \"2c49961f-cfd8-428d-b32b-4e3f85e554d5\" (UID: \"2c49961f-cfd8-428d-b32b-4e3f85e554d5\") " Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.174998 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67bbl\" (UniqueName: \"kubernetes.io/projected/2c49961f-cfd8-428d-b32b-4e3f85e554d5-kube-api-access-67bbl\") pod \"2c49961f-cfd8-428d-b32b-4e3f85e554d5\" (UID: \"2c49961f-cfd8-428d-b32b-4e3f85e554d5\") " Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.196303 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c49961f-cfd8-428d-b32b-4e3f85e554d5-kube-api-access-67bbl" (OuterVolumeSpecName: "kube-api-access-67bbl") pod "2c49961f-cfd8-428d-b32b-4e3f85e554d5" (UID: "2c49961f-cfd8-428d-b32b-4e3f85e554d5"). InnerVolumeSpecName "kube-api-access-67bbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.205778 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c49961f-cfd8-428d-b32b-4e3f85e554d5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2c49961f-cfd8-428d-b32b-4e3f85e554d5" (UID: "2c49961f-cfd8-428d-b32b-4e3f85e554d5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.215458 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c49961f-cfd8-428d-b32b-4e3f85e554d5-inventory" (OuterVolumeSpecName: "inventory") pod "2c49961f-cfd8-428d-b32b-4e3f85e554d5" (UID: "2c49961f-cfd8-428d-b32b-4e3f85e554d5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.279995 4667 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c49961f-cfd8-428d-b32b-4e3f85e554d5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.281108 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67bbl\" (UniqueName: \"kubernetes.io/projected/2c49961f-cfd8-428d-b32b-4e3f85e554d5-kube-api-access-67bbl\") on node \"crc\" DevicePath \"\"" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.281149 4667 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c49961f-cfd8-428d-b32b-4e3f85e554d5-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.501225 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr" event={"ID":"2c49961f-cfd8-428d-b32b-4e3f85e554d5","Type":"ContainerDied","Data":"4747b2a59cf9da12ed235e5bd56ec151e9905ac7f8236c2e7af0f63c6235834a"} Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.501292 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4747b2a59cf9da12ed235e5bd56ec151e9905ac7f8236c2e7af0f63c6235834a" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.501336 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jpglr" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.640878 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp"] Jan 31 04:19:36 crc kubenswrapper[4667]: E0131 04:19:36.641534 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c49961f-cfd8-428d-b32b-4e3f85e554d5" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.641605 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c49961f-cfd8-428d-b32b-4e3f85e554d5" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.641944 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c49961f-cfd8-428d-b32b-4e3f85e554d5" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.642702 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.645379 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.645777 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7p2q" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.646027 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.647856 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.658789 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp"] Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.693955 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39f585ed-5556-4f88-b5c0-3b6da9807764-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-972xp\" (UID: \"39f585ed-5556-4f88-b5c0-3b6da9807764\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.694291 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g788\" (UniqueName: \"kubernetes.io/projected/39f585ed-5556-4f88-b5c0-3b6da9807764-kube-api-access-5g788\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-972xp\" (UID: \"39f585ed-5556-4f88-b5c0-3b6da9807764\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.694407 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39f585ed-5556-4f88-b5c0-3b6da9807764-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-972xp\" (UID: \"39f585ed-5556-4f88-b5c0-3b6da9807764\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.795961 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g788\" (UniqueName: \"kubernetes.io/projected/39f585ed-5556-4f88-b5c0-3b6da9807764-kube-api-access-5g788\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-972xp\" (UID: \"39f585ed-5556-4f88-b5c0-3b6da9807764\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.796027 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39f585ed-5556-4f88-b5c0-3b6da9807764-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-972xp\" (UID: \"39f585ed-5556-4f88-b5c0-3b6da9807764\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.796071 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/39f585ed-5556-4f88-b5c0-3b6da9807764-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-972xp\" (UID: \"39f585ed-5556-4f88-b5c0-3b6da9807764\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.801988 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39f585ed-5556-4f88-b5c0-3b6da9807764-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-972xp\" (UID: \"39f585ed-5556-4f88-b5c0-3b6da9807764\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.804375 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39f585ed-5556-4f88-b5c0-3b6da9807764-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-972xp\" (UID: \"39f585ed-5556-4f88-b5c0-3b6da9807764\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.813776 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g788\" (UniqueName: \"kubernetes.io/projected/39f585ed-5556-4f88-b5c0-3b6da9807764-kube-api-access-5g788\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-972xp\" (UID: \"39f585ed-5556-4f88-b5c0-3b6da9807764\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp" Jan 31 04:19:36 crc kubenswrapper[4667]: I0131 04:19:36.996623 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp" Jan 31 04:19:37 crc kubenswrapper[4667]: I0131 04:19:37.624421 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp"] Jan 31 04:19:38 crc kubenswrapper[4667]: I0131 04:19:38.523288 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp" event={"ID":"39f585ed-5556-4f88-b5c0-3b6da9807764","Type":"ContainerStarted","Data":"399862598b5972ee557d9d22357cc3e99a0cb4a4f11e7261641d0c4ffd857ed8"} Jan 31 04:19:38 crc kubenswrapper[4667]: I0131 04:19:38.524095 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp" event={"ID":"39f585ed-5556-4f88-b5c0-3b6da9807764","Type":"ContainerStarted","Data":"3bfaad661f6df9722635bc94c0dab43a292419e8d7e99e2fdc8a457e47df36c9"} Jan 31 04:19:38 crc kubenswrapper[4667]: I0131 04:19:38.543101 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp" podStartSLOduration=2.089868148 podStartE2EDuration="2.543073099s" podCreationTimestamp="2026-01-31 04:19:36 +0000 UTC" firstStartedPulling="2026-01-31 04:19:37.623785465 +0000 UTC m=+1901.140120764" lastFinishedPulling="2026-01-31 04:19:38.076990376 +0000 UTC m=+1901.593325715" observedRunningTime="2026-01-31 04:19:38.539659168 +0000 UTC m=+1902.055994487" watchObservedRunningTime="2026-01-31 04:19:38.543073099 +0000 UTC m=+1902.059408408" Jan 31 04:19:43 crc kubenswrapper[4667]: I0131 04:19:43.583626 4667 generic.go:334] "Generic (PLEG): container finished" podID="39f585ed-5556-4f88-b5c0-3b6da9807764" 
containerID="399862598b5972ee557d9d22357cc3e99a0cb4a4f11e7261641d0c4ffd857ed8" exitCode=0 Jan 31 04:19:43 crc kubenswrapper[4667]: I0131 04:19:43.583718 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp" event={"ID":"39f585ed-5556-4f88-b5c0-3b6da9807764","Type":"ContainerDied","Data":"399862598b5972ee557d9d22357cc3e99a0cb4a4f11e7261641d0c4ffd857ed8"} Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.097708 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.122176 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39f585ed-5556-4f88-b5c0-3b6da9807764-inventory\") pod \"39f585ed-5556-4f88-b5c0-3b6da9807764\" (UID: \"39f585ed-5556-4f88-b5c0-3b6da9807764\") " Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.122361 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g788\" (UniqueName: \"kubernetes.io/projected/39f585ed-5556-4f88-b5c0-3b6da9807764-kube-api-access-5g788\") pod \"39f585ed-5556-4f88-b5c0-3b6da9807764\" (UID: \"39f585ed-5556-4f88-b5c0-3b6da9807764\") " Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.122406 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39f585ed-5556-4f88-b5c0-3b6da9807764-ssh-key-openstack-edpm-ipam\") pod \"39f585ed-5556-4f88-b5c0-3b6da9807764\" (UID: \"39f585ed-5556-4f88-b5c0-3b6da9807764\") " Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.143374 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39f585ed-5556-4f88-b5c0-3b6da9807764-kube-api-access-5g788" (OuterVolumeSpecName: "kube-api-access-5g788") pod "39f585ed-5556-4f88-b5c0-3b6da9807764" (UID: "39f585ed-5556-4f88-b5c0-3b6da9807764"). InnerVolumeSpecName "kube-api-access-5g788". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.157116 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f585ed-5556-4f88-b5c0-3b6da9807764-inventory" (OuterVolumeSpecName: "inventory") pod "39f585ed-5556-4f88-b5c0-3b6da9807764" (UID: "39f585ed-5556-4f88-b5c0-3b6da9807764"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.178316 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f585ed-5556-4f88-b5c0-3b6da9807764-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "39f585ed-5556-4f88-b5c0-3b6da9807764" (UID: "39f585ed-5556-4f88-b5c0-3b6da9807764"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.224537 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g788\" (UniqueName: \"kubernetes.io/projected/39f585ed-5556-4f88-b5c0-3b6da9807764-kube-api-access-5g788\") on node \"crc\" DevicePath \"\"" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.224583 4667 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/39f585ed-5556-4f88-b5c0-3b6da9807764-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.224596 4667 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/39f585ed-5556-4f88-b5c0-3b6da9807764-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.282616 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:19:45 crc kubenswrapper[4667]: E0131 04:19:45.283235 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.608674 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp" event={"ID":"39f585ed-5556-4f88-b5c0-3b6da9807764","Type":"ContainerDied","Data":"3bfaad661f6df9722635bc94c0dab43a292419e8d7e99e2fdc8a457e47df36c9"} Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.608730 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bfaad661f6df9722635bc94c0dab43a292419e8d7e99e2fdc8a457e47df36c9" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.609199 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-972xp" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.735821 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7"] Jan 31 04:19:45 crc kubenswrapper[4667]: E0131 04:19:45.736269 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f585ed-5556-4f88-b5c0-3b6da9807764" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.736305 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f585ed-5556-4f88-b5c0-3b6da9807764" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.736535 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="39f585ed-5556-4f88-b5c0-3b6da9807764" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.737567 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.741649 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.742213 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.742228 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.742365 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7p2q" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.751961 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7"] Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.841287 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1426178-3085-452c-8da2-15a2bce73a55-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2xhw7\" (UID: \"c1426178-3085-452c-8da2-15a2bce73a55\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.841414 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9zt6\" (UniqueName: \"kubernetes.io/projected/c1426178-3085-452c-8da2-15a2bce73a55-kube-api-access-g9zt6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2xhw7\" (UID: \"c1426178-3085-452c-8da2-15a2bce73a55\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.841597 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1426178-3085-452c-8da2-15a2bce73a55-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2xhw7\" (UID: \"c1426178-3085-452c-8da2-15a2bce73a55\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.946758 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1426178-3085-452c-8da2-15a2bce73a55-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2xhw7\" (UID: \"c1426178-3085-452c-8da2-15a2bce73a55\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.947201 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9zt6\" (UniqueName: \"kubernetes.io/projected/c1426178-3085-452c-8da2-15a2bce73a55-kube-api-access-g9zt6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2xhw7\" (UID: \"c1426178-3085-452c-8da2-15a2bce73a55\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.947390 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1426178-3085-452c-8da2-15a2bce73a55-ssh-key-openstack-edpm-ipam\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-2xhw7\" (UID: \"c1426178-3085-452c-8da2-15a2bce73a55\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.956474 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1426178-3085-452c-8da2-15a2bce73a55-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2xhw7\" (UID: \"c1426178-3085-452c-8da2-15a2bce73a55\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.970604 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1426178-3085-452c-8da2-15a2bce73a55-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2xhw7\" (UID: \"c1426178-3085-452c-8da2-15a2bce73a55\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7" Jan 31 04:19:45 crc kubenswrapper[4667]: I0131 04:19:45.979105 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9zt6\" (UniqueName: \"kubernetes.io/projected/c1426178-3085-452c-8da2-15a2bce73a55-kube-api-access-g9zt6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2xhw7\" (UID: \"c1426178-3085-452c-8da2-15a2bce73a55\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7" Jan 31 04:19:46 crc kubenswrapper[4667]: I0131 04:19:46.065597 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7" Jan 31 04:19:46 crc kubenswrapper[4667]: I0131 04:19:46.691679 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7"] Jan 31 04:19:47 crc kubenswrapper[4667]: I0131 04:19:47.651542 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7" event={"ID":"c1426178-3085-452c-8da2-15a2bce73a55","Type":"ContainerStarted","Data":"7fb32ebc42ca1ccfa172ed922a6b693e91af784489655bfcbe6e385680e9d673"} Jan 31 04:19:48 crc kubenswrapper[4667]: I0131 04:19:48.665603 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7" event={"ID":"c1426178-3085-452c-8da2-15a2bce73a55","Type":"ContainerStarted","Data":"f0d7383fa1a6ead112c5a55161aac97790560956e361e1ad83cb36db9fbb6b6b"} Jan 31 04:19:48 crc kubenswrapper[4667]: I0131 04:19:48.709572 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7" podStartSLOduration=3.044219834 podStartE2EDuration="3.709540108s" podCreationTimestamp="2026-01-31 04:19:45 +0000 UTC" firstStartedPulling="2026-01-31 04:19:46.690820986 +0000 UTC m=+1910.207156325" lastFinishedPulling="2026-01-31 04:19:47.35614129 +0000 UTC m=+1910.872476599" observedRunningTime="2026-01-31 04:19:48.696566755 +0000 UTC m=+1912.212902094" watchObservedRunningTime="2026-01-31 04:19:48.709540108 +0000 UTC m=+1912.225875407" Jan 31 04:19:52 crc kubenswrapper[4667]: I0131 04:19:52.844508 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-8bff87d99-j8cd2" podUID="30fc5b26-45dd-42f8-9a58-7ba07c5aa56a" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 31 04:19:57 crc 
kubenswrapper[4667]: I0131 04:19:57.284241 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:19:57 crc kubenswrapper[4667]: E0131 04:19:57.285405 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:20:00 crc kubenswrapper[4667]: I0131 04:20:00.050804 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bhwqz"] Jan 31 04:20:00 crc kubenswrapper[4667]: I0131 04:20:00.061438 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bhwqz"] Jan 31 04:20:01 crc kubenswrapper[4667]: I0131 04:20:01.296031 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78e0b2a7-8c04-43a3-86b7-d2406c2125c7" path="/var/lib/kubelet/pods/78e0b2a7-8c04-43a3-86b7-d2406c2125c7/volumes" Jan 31 04:20:11 crc kubenswrapper[4667]: I0131 04:20:11.282174 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:20:11 crc kubenswrapper[4667]: E0131 04:20:11.283402 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:20:22 crc kubenswrapper[4667]: I0131 04:20:22.283934 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:20:22 crc kubenswrapper[4667]: E0131 04:20:22.285045 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:20:25 crc kubenswrapper[4667]: I0131 04:20:25.095095 4667 scope.go:117] "RemoveContainer" containerID="e5369b5d2bf898d072daf88b8a2a3ad5ebb1d9b7470b0e202b2584d71f186765" Jan 31 04:20:27 crc kubenswrapper[4667]: I0131 04:20:27.111870 4667 generic.go:334] "Generic (PLEG): container finished" podID="c1426178-3085-452c-8da2-15a2bce73a55" containerID="f0d7383fa1a6ead112c5a55161aac97790560956e361e1ad83cb36db9fbb6b6b" exitCode=0 Jan 31 04:20:27 crc kubenswrapper[4667]: I0131 04:20:27.111966 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7" event={"ID":"c1426178-3085-452c-8da2-15a2bce73a55","Type":"ContainerDied","Data":"f0d7383fa1a6ead112c5a55161aac97790560956e361e1ad83cb36db9fbb6b6b"} Jan 31 04:20:28 crc kubenswrapper[4667]: I0131 04:20:28.841925 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7" Jan 31 04:20:28 crc kubenswrapper[4667]: I0131 04:20:28.876000 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9zt6\" (UniqueName: \"kubernetes.io/projected/c1426178-3085-452c-8da2-15a2bce73a55-kube-api-access-g9zt6\") pod \"c1426178-3085-452c-8da2-15a2bce73a55\" (UID: \"c1426178-3085-452c-8da2-15a2bce73a55\") " Jan 31 04:20:28 crc kubenswrapper[4667]: I0131 04:20:28.876176 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1426178-3085-452c-8da2-15a2bce73a55-inventory\") pod \"c1426178-3085-452c-8da2-15a2bce73a55\" (UID: \"c1426178-3085-452c-8da2-15a2bce73a55\") " Jan 31 04:20:28 crc kubenswrapper[4667]: I0131 04:20:28.876250 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1426178-3085-452c-8da2-15a2bce73a55-ssh-key-openstack-edpm-ipam\") pod \"c1426178-3085-452c-8da2-15a2bce73a55\" (UID: \"c1426178-3085-452c-8da2-15a2bce73a55\") " Jan 31 04:20:28 crc kubenswrapper[4667]: I0131 04:20:28.886600 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1426178-3085-452c-8da2-15a2bce73a55-kube-api-access-g9zt6" (OuterVolumeSpecName: "kube-api-access-g9zt6") pod "c1426178-3085-452c-8da2-15a2bce73a55" (UID: "c1426178-3085-452c-8da2-15a2bce73a55"). InnerVolumeSpecName "kube-api-access-g9zt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:20:28 crc kubenswrapper[4667]: I0131 04:20:28.913493 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1426178-3085-452c-8da2-15a2bce73a55-inventory" (OuterVolumeSpecName: "inventory") pod "c1426178-3085-452c-8da2-15a2bce73a55" (UID: "c1426178-3085-452c-8da2-15a2bce73a55"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:20:28 crc kubenswrapper[4667]: I0131 04:20:28.927214 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1426178-3085-452c-8da2-15a2bce73a55-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c1426178-3085-452c-8da2-15a2bce73a55" (UID: "c1426178-3085-452c-8da2-15a2bce73a55"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:20:28 crc kubenswrapper[4667]: I0131 04:20:28.980166 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9zt6\" (UniqueName: \"kubernetes.io/projected/c1426178-3085-452c-8da2-15a2bce73a55-kube-api-access-g9zt6\") on node \"crc\" DevicePath \"\"" Jan 31 04:20:28 crc kubenswrapper[4667]: I0131 04:20:28.980391 4667 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1426178-3085-452c-8da2-15a2bce73a55-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:20:28 crc kubenswrapper[4667]: I0131 04:20:28.980506 4667 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1426178-3085-452c-8da2-15a2bce73a55-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.140747 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7" event={"ID":"c1426178-3085-452c-8da2-15a2bce73a55","Type":"ContainerDied","Data":"7fb32ebc42ca1ccfa172ed922a6b693e91af784489655bfcbe6e385680e9d673"} Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.140817 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fb32ebc42ca1ccfa172ed922a6b693e91af784489655bfcbe6e385680e9d673" Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.140937 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2xhw7" Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.269525 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8"] Jan 31 04:20:29 crc kubenswrapper[4667]: E0131 04:20:29.270856 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1426178-3085-452c-8da2-15a2bce73a55" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.270879 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1426178-3085-452c-8da2-15a2bce73a55" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.271158 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1426178-3085-452c-8da2-15a2bce73a55" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.272211 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8" Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.283420 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.283705 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7p2q" Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.283981 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.284154 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.295001 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8"] Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.396463 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r8sq\" (UniqueName: \"kubernetes.io/projected/f2ba4344-86fc-4f0f-86ed-7daec27549ec-kube-api-access-8r8sq\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8\" (UID: \"f2ba4344-86fc-4f0f-86ed-7daec27549ec\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8" Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.396615 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f2ba4344-86fc-4f0f-86ed-7daec27549ec-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8\" (UID: \"f2ba4344-86fc-4f0f-86ed-7daec27549ec\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8" Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.396650 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2ba4344-86fc-4f0f-86ed-7daec27549ec-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8\" (UID: \"f2ba4344-86fc-4f0f-86ed-7daec27549ec\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8" Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.498341 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f2ba4344-86fc-4f0f-86ed-7daec27549ec-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8\" (UID: \"f2ba4344-86fc-4f0f-86ed-7daec27549ec\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8" Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.498700 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2ba4344-86fc-4f0f-86ed-7daec27549ec-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8\" (UID: \"f2ba4344-86fc-4f0f-86ed-7daec27549ec\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8" Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.498915 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r8sq\" (UniqueName: 
\"kubernetes.io/projected/f2ba4344-86fc-4f0f-86ed-7daec27549ec-kube-api-access-8r8sq\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8\" (UID: \"f2ba4344-86fc-4f0f-86ed-7daec27549ec\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8" Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.506061 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f2ba4344-86fc-4f0f-86ed-7daec27549ec-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8\" (UID: \"f2ba4344-86fc-4f0f-86ed-7daec27549ec\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8" Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.506265 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2ba4344-86fc-4f0f-86ed-7daec27549ec-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8\" (UID: \"f2ba4344-86fc-4f0f-86ed-7daec27549ec\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8" Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.523282 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r8sq\" (UniqueName: \"kubernetes.io/projected/f2ba4344-86fc-4f0f-86ed-7daec27549ec-kube-api-access-8r8sq\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8\" (UID: \"f2ba4344-86fc-4f0f-86ed-7daec27549ec\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8" Jan 31 04:20:29 crc kubenswrapper[4667]: I0131 04:20:29.590861 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8" Jan 31 04:20:30 crc kubenswrapper[4667]: I0131 04:20:30.074679 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-ggfpz"] Jan 31 04:20:30 crc kubenswrapper[4667]: I0131 04:20:30.092909 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-ggfpz"] Jan 31 04:20:30 crc kubenswrapper[4667]: I0131 04:20:30.197016 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8"] Jan 31 04:20:31 crc kubenswrapper[4667]: I0131 04:20:31.159695 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8" event={"ID":"f2ba4344-86fc-4f0f-86ed-7daec27549ec","Type":"ContainerStarted","Data":"1cfa431dc6f8ccab60bdf42785562e3acdafb5770aca4db3fc8f8f0b838d9c63"} Jan 31 04:20:31 crc kubenswrapper[4667]: I0131 04:20:31.160162 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8" event={"ID":"f2ba4344-86fc-4f0f-86ed-7daec27549ec","Type":"ContainerStarted","Data":"5b78b1a2b53a21123310a2be68009297f5d9354818706a91c3a0c49989fa80ec"} Jan 31 04:20:31 crc kubenswrapper[4667]: I0131 04:20:31.192199 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8" podStartSLOduration=1.657482281 podStartE2EDuration="2.192171778s" podCreationTimestamp="2026-01-31 04:20:29 +0000 UTC" firstStartedPulling="2026-01-31 04:20:30.200412967 +0000 UTC m=+1953.716748276" lastFinishedPulling="2026-01-31 04:20:30.735102474 +0000 UTC m=+1954.251437773" observedRunningTime="2026-01-31 04:20:31.18470537 +0000 UTC 
m=+1954.701040689" watchObservedRunningTime="2026-01-31 04:20:31.192171778 +0000 UTC m=+1954.708507087" Jan 31 04:20:31 crc kubenswrapper[4667]: I0131 04:20:31.296368 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87224b26-43eb-4712-bef1-050a0653fb28" path="/var/lib/kubelet/pods/87224b26-43eb-4712-bef1-050a0653fb28/volumes" Jan 31 04:20:32 crc kubenswrapper[4667]: I0131 04:20:32.046036 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-w94fr"] Jan 31 04:20:32 crc kubenswrapper[4667]: I0131 04:20:32.056895 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-w94fr"] Jan 31 04:20:33 crc kubenswrapper[4667]: I0131 04:20:33.295635 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed98e28a-5baf-4f7a-aafb-b03916785619" path="/var/lib/kubelet/pods/ed98e28a-5baf-4f7a-aafb-b03916785619/volumes" Jan 31 04:20:37 crc kubenswrapper[4667]: I0131 04:20:37.291239 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:20:37 crc kubenswrapper[4667]: E0131 04:20:37.292638 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:20:48 crc kubenswrapper[4667]: I0131 04:20:48.292801 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:20:48 crc kubenswrapper[4667]: E0131 04:20:48.293717 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:21:00 crc kubenswrapper[4667]: I0131 04:21:00.283620 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:21:00 crc kubenswrapper[4667]: E0131 04:21:00.285392 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:21:12 crc kubenswrapper[4667]: I0131 04:21:12.282397 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:21:12 crc kubenswrapper[4667]: E0131 04:21:12.283628 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:21:14 crc kubenswrapper[4667]: I0131 04:21:14.049518 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-67h7k"] Jan 31 04:21:14 crc kubenswrapper[4667]: I0131 04:21:14.063382 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-67h7k"] Jan 31 04:21:15 crc kubenswrapper[4667]: I0131 04:21:15.295417 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9676c6cd-275c-4aaa-86b6-cdcca7df370e" path="/var/lib/kubelet/pods/9676c6cd-275c-4aaa-86b6-cdcca7df370e/volumes" Jan 31 04:21:22 crc kubenswrapper[4667]: I0131 04:21:22.705211 4667 generic.go:334] "Generic (PLEG): container finished" podID="f2ba4344-86fc-4f0f-86ed-7daec27549ec" containerID="1cfa431dc6f8ccab60bdf42785562e3acdafb5770aca4db3fc8f8f0b838d9c63" exitCode=0 Jan 31 04:21:22 crc kubenswrapper[4667]: I0131 04:21:22.705356 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8" event={"ID":"f2ba4344-86fc-4f0f-86ed-7daec27549ec","Type":"ContainerDied","Data":"1cfa431dc6f8ccab60bdf42785562e3acdafb5770aca4db3fc8f8f0b838d9c63"} Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.145376 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8" Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.252614 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f2ba4344-86fc-4f0f-86ed-7daec27549ec-ssh-key-openstack-edpm-ipam\") pod \"f2ba4344-86fc-4f0f-86ed-7daec27549ec\" (UID: \"f2ba4344-86fc-4f0f-86ed-7daec27549ec\") " Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.252668 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r8sq\" (UniqueName: \"kubernetes.io/projected/f2ba4344-86fc-4f0f-86ed-7daec27549ec-kube-api-access-8r8sq\") pod \"f2ba4344-86fc-4f0f-86ed-7daec27549ec\" (UID: \"f2ba4344-86fc-4f0f-86ed-7daec27549ec\") " Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.252787 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2ba4344-86fc-4f0f-86ed-7daec27549ec-inventory\") pod \"f2ba4344-86fc-4f0f-86ed-7daec27549ec\" (UID: \"f2ba4344-86fc-4f0f-86ed-7daec27549ec\") " Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.263045 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2ba4344-86fc-4f0f-86ed-7daec27549ec-kube-api-access-8r8sq" (OuterVolumeSpecName: "kube-api-access-8r8sq") pod "f2ba4344-86fc-4f0f-86ed-7daec27549ec" (UID: "f2ba4344-86fc-4f0f-86ed-7daec27549ec"). InnerVolumeSpecName "kube-api-access-8r8sq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.295975 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2ba4344-86fc-4f0f-86ed-7daec27549ec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f2ba4344-86fc-4f0f-86ed-7daec27549ec" (UID: "f2ba4344-86fc-4f0f-86ed-7daec27549ec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.308045 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2ba4344-86fc-4f0f-86ed-7daec27549ec-inventory" (OuterVolumeSpecName: "inventory") pod "f2ba4344-86fc-4f0f-86ed-7daec27549ec" (UID: "f2ba4344-86fc-4f0f-86ed-7daec27549ec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.355520 4667 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f2ba4344-86fc-4f0f-86ed-7daec27549ec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.355554 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r8sq\" (UniqueName: \"kubernetes.io/projected/f2ba4344-86fc-4f0f-86ed-7daec27549ec-kube-api-access-8r8sq\") on node \"crc\" DevicePath \"\"" Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.355566 4667 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2ba4344-86fc-4f0f-86ed-7daec27549ec-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.741517 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8" event={"ID":"f2ba4344-86fc-4f0f-86ed-7daec27549ec","Type":"ContainerDied","Data":"5b78b1a2b53a21123310a2be68009297f5d9354818706a91c3a0c49989fa80ec"} Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.741585 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b78b1a2b53a21123310a2be68009297f5d9354818706a91c3a0c49989fa80ec" Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.741676 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8" Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.857807 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-h2dbq"] Jan 31 04:21:24 crc kubenswrapper[4667]: E0131 04:21:24.858358 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2ba4344-86fc-4f0f-86ed-7daec27549ec" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.858372 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2ba4344-86fc-4f0f-86ed-7daec27549ec" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.858549 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2ba4344-86fc-4f0f-86ed-7daec27549ec" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.886748 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-h2dbq" Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.890969 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7p2q" Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.891831 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.895826 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.896288 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-h2dbq"] Jan 31 04:21:24 crc kubenswrapper[4667]: I0131 04:21:24.897420 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:21:25 crc kubenswrapper[4667]: I0131 04:21:25.125955 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-h2dbq\" (UID: \"5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2dbq" Jan 31 04:21:25 crc kubenswrapper[4667]: I0131 04:21:25.126025 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-h2dbq\" (UID: \"5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2dbq" Jan 31 04:21:25 crc kubenswrapper[4667]: I0131 04:21:25.126862 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c25j6\" (UniqueName: \"kubernetes.io/projected/5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8-kube-api-access-c25j6\") pod \"ssh-known-hosts-edpm-deployment-h2dbq\" (UID: \"5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2dbq" Jan 31 04:21:25 crc kubenswrapper[4667]: I0131 04:21:25.176553 4667 scope.go:117] "RemoveContainer" containerID="6fdabdeaf9bb42c59ece3a6e37bd433f307b6b07f41f108176a8a37e0018d820" Jan 31 04:21:25 crc kubenswrapper[4667]: I0131 04:21:25.230478 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c25j6\" (UniqueName: \"kubernetes.io/projected/5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8-kube-api-access-c25j6\") pod \"ssh-known-hosts-edpm-deployment-h2dbq\" (UID: \"5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2dbq" Jan 31 04:21:25 crc kubenswrapper[4667]: I0131 04:21:25.230605 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-h2dbq\" (UID: \"5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2dbq" Jan 31 04:21:25 crc kubenswrapper[4667]: I0131 04:21:25.230641 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8-inventory-0\") 
pod \"ssh-known-hosts-edpm-deployment-h2dbq\" (UID: \"5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2dbq" Jan 31 04:21:25 crc kubenswrapper[4667]: I0131 04:21:25.239501 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-h2dbq\" (UID: \"5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2dbq" Jan 31 04:21:25 crc kubenswrapper[4667]: I0131 04:21:25.240430 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-h2dbq\" (UID: \"5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2dbq" Jan 31 04:21:25 crc kubenswrapper[4667]: I0131 04:21:25.245541 4667 scope.go:117] "RemoveContainer" containerID="56a297688de57dc27497a4f04bd410ffa1654c2e35b421ce26c69749805b19b3" Jan 31 04:21:25 crc kubenswrapper[4667]: I0131 04:21:25.250384 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c25j6\" (UniqueName: \"kubernetes.io/projected/5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8-kube-api-access-c25j6\") pod \"ssh-known-hosts-edpm-deployment-h2dbq\" (UID: \"5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8\") " pod="openstack/ssh-known-hosts-edpm-deployment-h2dbq" Jan 31 04:21:25 crc kubenswrapper[4667]: I0131 04:21:25.380491 4667 scope.go:117] "RemoveContainer" containerID="38f08cd67983bd4889429f7af6614e1b16f81cb85fa950741834b9eed02dbd53" Jan 31 04:21:25 crc kubenswrapper[4667]: I0131 04:21:25.517904 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-h2dbq" Jan 31 04:21:26 crc kubenswrapper[4667]: I0131 04:21:26.078389 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-h2dbq"] Jan 31 04:21:26 crc kubenswrapper[4667]: I0131 04:21:26.281957 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:21:26 crc kubenswrapper[4667]: I0131 04:21:26.776507 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerStarted","Data":"ef1eb767bd280ddd0c820466d361f32d61a4b5810e115e81b09dba157f785aad"} Jan 31 04:21:26 crc kubenswrapper[4667]: I0131 04:21:26.782020 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-h2dbq" event={"ID":"5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8","Type":"ContainerStarted","Data":"9db3399cf318745112abcc140fe34e875a5893d54ca70f91fbf90b8b1cfdffda"} Jan 31 04:21:27 crc kubenswrapper[4667]: I0131 04:21:27.801362 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-h2dbq" event={"ID":"5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8","Type":"ContainerStarted","Data":"75c17218ed882d34de7c45289b4ad834247550c7bfe2823a14e19dcb5883644a"} Jan 31 04:21:27 crc kubenswrapper[4667]: I0131 04:21:27.831503 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-h2dbq" podStartSLOduration=3.339082792 podStartE2EDuration="3.831484151s" podCreationTimestamp="2026-01-31 04:21:24 +0000 UTC" firstStartedPulling="2026-01-31 04:21:26.083369317 +0000 UTC m=+2009.599704646" lastFinishedPulling="2026-01-31 04:21:26.575770706 +0000 UTC m=+2010.092106005" observedRunningTime="2026-01-31 04:21:27.827035143 +0000 UTC m=+2011.343370442" watchObservedRunningTime="2026-01-31 04:21:27.831484151 +0000 UTC m=+2011.347819450" Jan 31 04:21:34 crc kubenswrapper[4667]: I0131 04:21:34.882818 4667 generic.go:334] "Generic (PLEG): container finished" podID="5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8" containerID="75c17218ed882d34de7c45289b4ad834247550c7bfe2823a14e19dcb5883644a" exitCode=0 Jan 31 04:21:34 crc kubenswrapper[4667]: I0131 04:21:34.882898 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-h2dbq" event={"ID":"5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8","Type":"ContainerDied","Data":"75c17218ed882d34de7c45289b4ad834247550c7bfe2823a14e19dcb5883644a"} Jan 31 04:21:36 crc kubenswrapper[4667]: I0131 04:21:36.373863 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-h2dbq" Jan 31 04:21:36 crc kubenswrapper[4667]: I0131 04:21:36.428928 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8-ssh-key-openstack-edpm-ipam\") pod \"5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8\" (UID: \"5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8\") " Jan 31 04:21:36 crc kubenswrapper[4667]: I0131 04:21:36.429079 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c25j6\" (UniqueName: \"kubernetes.io/projected/5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8-kube-api-access-c25j6\") pod \"5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8\" (UID: \"5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8\") " Jan 31 04:21:36 crc kubenswrapper[4667]: I0131 04:21:36.429140 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8-inventory-0\") pod \"5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8\" (UID: \"5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8\") " Jan 31 04:21:36 crc kubenswrapper[4667]: I0131 04:21:36.452908 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8-kube-api-access-c25j6" (OuterVolumeSpecName: "kube-api-access-c25j6") pod "5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8" (UID: "5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8"). InnerVolumeSpecName "kube-api-access-c25j6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:21:36 crc kubenswrapper[4667]: I0131 04:21:36.459293 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8" (UID: "5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:21:36 crc kubenswrapper[4667]: I0131 04:21:36.485025 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8" (UID: "5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:21:36 crc kubenswrapper[4667]: I0131 04:21:36.532464 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c25j6\" (UniqueName: \"kubernetes.io/projected/5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8-kube-api-access-c25j6\") on node \"crc\" DevicePath \"\"" Jan 31 04:21:36 crc kubenswrapper[4667]: I0131 04:21:36.532529 4667 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:21:36 crc kubenswrapper[4667]: I0131 04:21:36.532545 4667 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:21:36 crc kubenswrapper[4667]: I0131 04:21:36.909542 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-h2dbq" event={"ID":"5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8","Type":"ContainerDied","Data":"9db3399cf318745112abcc140fe34e875a5893d54ca70f91fbf90b8b1cfdffda"} Jan 31 04:21:36 crc kubenswrapper[4667]: I0131 04:21:36.910053 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9db3399cf318745112abcc140fe34e875a5893d54ca70f91fbf90b8b1cfdffda" Jan 31 04:21:36 crc kubenswrapper[4667]: I0131 04:21:36.909762 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-h2dbq" Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.025458 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84"] Jan 31 04:21:37 crc kubenswrapper[4667]: E0131 04:21:37.026001 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8" containerName="ssh-known-hosts-edpm-deployment" Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.026026 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8" containerName="ssh-known-hosts-edpm-deployment" Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.026256 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8" containerName="ssh-known-hosts-edpm-deployment" Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.027597 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84" Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.032450 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.032645 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7p2q" Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.032650 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.033136 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.043764 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbv26\" (UniqueName: \"kubernetes.io/projected/10997808-cd78-4267-b7a3-7ea36b948a60-kube-api-access-rbv26\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8fd84\" (UID: \"10997808-cd78-4267-b7a3-7ea36b948a60\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84" Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.043870 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10997808-cd78-4267-b7a3-7ea36b948a60-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8fd84\" (UID: \"10997808-cd78-4267-b7a3-7ea36b948a60\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84" Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.043952 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10997808-cd78-4267-b7a3-7ea36b948a60-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8fd84\" (UID: \"10997808-cd78-4267-b7a3-7ea36b948a60\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84" Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.043962 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84"] Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.145896 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10997808-cd78-4267-b7a3-7ea36b948a60-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8fd84\" (UID: \"10997808-cd78-4267-b7a3-7ea36b948a60\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84" Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.146368 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbv26\" (UniqueName: \"kubernetes.io/projected/10997808-cd78-4267-b7a3-7ea36b948a60-kube-api-access-rbv26\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8fd84\" (UID: \"10997808-cd78-4267-b7a3-7ea36b948a60\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84" Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.146526 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10997808-cd78-4267-b7a3-7ea36b948a60-inventory\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-8fd84\" (UID: \"10997808-cd78-4267-b7a3-7ea36b948a60\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84" Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.152572 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10997808-cd78-4267-b7a3-7ea36b948a60-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8fd84\" (UID: \"10997808-cd78-4267-b7a3-7ea36b948a60\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84" Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.160397 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10997808-cd78-4267-b7a3-7ea36b948a60-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8fd84\" (UID: \"10997808-cd78-4267-b7a3-7ea36b948a60\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84" Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.168029 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbv26\" (UniqueName: \"kubernetes.io/projected/10997808-cd78-4267-b7a3-7ea36b948a60-kube-api-access-rbv26\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8fd84\" (UID: \"10997808-cd78-4267-b7a3-7ea36b948a60\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84" Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.350883 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84" Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.931805 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84"] Jan 31 04:21:37 crc kubenswrapper[4667]: I0131 04:21:37.967030 4667 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:21:38 crc kubenswrapper[4667]: I0131 04:21:38.945104 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84" event={"ID":"10997808-cd78-4267-b7a3-7ea36b948a60","Type":"ContainerStarted","Data":"1654e2e6925a884c3f4848db93fe1bb7b9662cbd7952782df2225cd217c26d87"} Jan 31 04:21:38 crc kubenswrapper[4667]: I0131 04:21:38.945623 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84" event={"ID":"10997808-cd78-4267-b7a3-7ea36b948a60","Type":"ContainerStarted","Data":"f32a14fa21f00eae1e0db0b3d83609184558b361dcbdd03e15000e657a4396fa"} Jan 31 04:21:38 crc kubenswrapper[4667]: I0131 04:21:38.985282 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84" podStartSLOduration=2.554895647 podStartE2EDuration="2.985055518s" podCreationTimestamp="2026-01-31 04:21:36 +0000 UTC" firstStartedPulling="2026-01-31 04:21:37.966759615 +0000 UTC m=+2021.483094914" lastFinishedPulling="2026-01-31 04:21:38.396919466 +0000 UTC m=+2021.913254785" observedRunningTime="2026-01-31 04:21:38.97267872 +0000 UTC m=+2022.489014019" watchObservedRunningTime="2026-01-31 04:21:38.985055518 +0000 UTC m=+2022.501390817" Jan 31 04:21:47 crc kubenswrapper[4667]: I0131 04:21:47.052222 4667 generic.go:334] "Generic (PLEG): container finished" podID="10997808-cd78-4267-b7a3-7ea36b948a60" 
containerID="1654e2e6925a884c3f4848db93fe1bb7b9662cbd7952782df2225cd217c26d87" exitCode=0 Jan 31 04:21:47 crc kubenswrapper[4667]: I0131 04:21:47.052320 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84" event={"ID":"10997808-cd78-4267-b7a3-7ea36b948a60","Type":"ContainerDied","Data":"1654e2e6925a884c3f4848db93fe1bb7b9662cbd7952782df2225cd217c26d87"} Jan 31 04:21:48 crc kubenswrapper[4667]: I0131 04:21:48.604070 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84" Jan 31 04:21:48 crc kubenswrapper[4667]: I0131 04:21:48.616137 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbv26\" (UniqueName: \"kubernetes.io/projected/10997808-cd78-4267-b7a3-7ea36b948a60-kube-api-access-rbv26\") pod \"10997808-cd78-4267-b7a3-7ea36b948a60\" (UID: \"10997808-cd78-4267-b7a3-7ea36b948a60\") " Jan 31 04:21:48 crc kubenswrapper[4667]: I0131 04:21:48.616203 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10997808-cd78-4267-b7a3-7ea36b948a60-inventory\") pod \"10997808-cd78-4267-b7a3-7ea36b948a60\" (UID: \"10997808-cd78-4267-b7a3-7ea36b948a60\") " Jan 31 04:21:48 crc kubenswrapper[4667]: I0131 04:21:48.616242 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10997808-cd78-4267-b7a3-7ea36b948a60-ssh-key-openstack-edpm-ipam\") pod \"10997808-cd78-4267-b7a3-7ea36b948a60\" (UID: \"10997808-cd78-4267-b7a3-7ea36b948a60\") " Jan 31 04:21:48 crc kubenswrapper[4667]: I0131 04:21:48.631581 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10997808-cd78-4267-b7a3-7ea36b948a60-kube-api-access-rbv26" (OuterVolumeSpecName: "kube-api-access-rbv26") pod "10997808-cd78-4267-b7a3-7ea36b948a60" (UID: "10997808-cd78-4267-b7a3-7ea36b948a60"). InnerVolumeSpecName "kube-api-access-rbv26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:21:48 crc kubenswrapper[4667]: I0131 04:21:48.646208 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10997808-cd78-4267-b7a3-7ea36b948a60-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "10997808-cd78-4267-b7a3-7ea36b948a60" (UID: "10997808-cd78-4267-b7a3-7ea36b948a60"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:21:48 crc kubenswrapper[4667]: I0131 04:21:48.662957 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10997808-cd78-4267-b7a3-7ea36b948a60-inventory" (OuterVolumeSpecName: "inventory") pod "10997808-cd78-4267-b7a3-7ea36b948a60" (UID: "10997808-cd78-4267-b7a3-7ea36b948a60"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:21:48 crc kubenswrapper[4667]: I0131 04:21:48.718951 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbv26\" (UniqueName: \"kubernetes.io/projected/10997808-cd78-4267-b7a3-7ea36b948a60-kube-api-access-rbv26\") on node \"crc\" DevicePath \"\"" Jan 31 04:21:48 crc kubenswrapper[4667]: I0131 04:21:48.719015 4667 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10997808-cd78-4267-b7a3-7ea36b948a60-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:21:48 crc kubenswrapper[4667]: I0131 04:21:48.719030 4667 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10997808-cd78-4267-b7a3-7ea36b948a60-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.079541 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84" event={"ID":"10997808-cd78-4267-b7a3-7ea36b948a60","Type":"ContainerDied","Data":"f32a14fa21f00eae1e0db0b3d83609184558b361dcbdd03e15000e657a4396fa"} Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.080223 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f32a14fa21f00eae1e0db0b3d83609184558b361dcbdd03e15000e657a4396fa" Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.080338 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8fd84" Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.212245 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb"] Jan 31 04:21:49 crc kubenswrapper[4667]: E0131 04:21:49.212782 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10997808-cd78-4267-b7a3-7ea36b948a60" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.212808 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="10997808-cd78-4267-b7a3-7ea36b948a60" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.213427 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="10997808-cd78-4267-b7a3-7ea36b948a60" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.214580 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb" Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.219237 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.219424 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7p2q" Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.219496 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.219256 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.244224 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb"] Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.263803 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nktp\" (UniqueName: \"kubernetes.io/projected/65aa0404-25e7-4a24-8edf-ceae5320b02e-kube-api-access-2nktp\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb\" (UID: \"65aa0404-25e7-4a24-8edf-ceae5320b02e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb" Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.263904 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65aa0404-25e7-4a24-8edf-ceae5320b02e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb\" (UID: \"65aa0404-25e7-4a24-8edf-ceae5320b02e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb" Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.263939 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65aa0404-25e7-4a24-8edf-ceae5320b02e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb\" (UID: \"65aa0404-25e7-4a24-8edf-ceae5320b02e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb" Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.366004 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65aa0404-25e7-4a24-8edf-ceae5320b02e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb\" (UID: \"65aa0404-25e7-4a24-8edf-ceae5320b02e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb" Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.366090 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65aa0404-25e7-4a24-8edf-ceae5320b02e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb\" (UID: \"65aa0404-25e7-4a24-8edf-ceae5320b02e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb" Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.366290 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nktp\" (UniqueName: \"kubernetes.io/projected/65aa0404-25e7-4a24-8edf-ceae5320b02e-kube-api-access-2nktp\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb\" (UID: \"65aa0404-25e7-4a24-8edf-ceae5320b02e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb" Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.372079 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65aa0404-25e7-4a24-8edf-ceae5320b02e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb\" (UID: \"65aa0404-25e7-4a24-8edf-ceae5320b02e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb" Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.374596 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65aa0404-25e7-4a24-8edf-ceae5320b02e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb\" (UID: \"65aa0404-25e7-4a24-8edf-ceae5320b02e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb" Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.389572 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nktp\" (UniqueName: \"kubernetes.io/projected/65aa0404-25e7-4a24-8edf-ceae5320b02e-kube-api-access-2nktp\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb\" (UID: \"65aa0404-25e7-4a24-8edf-ceae5320b02e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb" Jan 31 04:21:49 crc kubenswrapper[4667]: I0131 04:21:49.541117 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb" Jan 31 04:21:50 crc kubenswrapper[4667]: I0131 04:21:50.181757 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb"] Jan 31 04:21:51 crc kubenswrapper[4667]: I0131 04:21:51.107556 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb" event={"ID":"65aa0404-25e7-4a24-8edf-ceae5320b02e","Type":"ContainerStarted","Data":"fb181f9fed48bdbe578ea66d3e2b50f9afb8ca00199e88f6e98d0b88fe0d4ed0"} Jan 31 04:21:51 crc kubenswrapper[4667]: I0131 04:21:51.108561 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb" event={"ID":"65aa0404-25e7-4a24-8edf-ceae5320b02e","Type":"ContainerStarted","Data":"bc860b5df7a2253594c97ea57015dc41c75e504a26577f423b62788e5ba731c5"} Jan 31 04:21:51 crc kubenswrapper[4667]: I0131 04:21:51.144336 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb" podStartSLOduration=1.706074619 podStartE2EDuration="2.144304064s" podCreationTimestamp="2026-01-31 04:21:49 +0000 UTC" firstStartedPulling="2026-01-31 04:21:50.18910219 +0000 UTC m=+2033.705437489" lastFinishedPulling="2026-01-31 04:21:50.627331605 +0000 UTC m=+2034.143666934" observedRunningTime="2026-01-31 04:21:51.131098524 +0000 UTC m=+2034.647433853" watchObservedRunningTime="2026-01-31 04:21:51.144304064 +0000 UTC m=+2034.660639383" Jan 31 04:22:01 crc kubenswrapper[4667]: I0131 04:22:01.221292 4667 generic.go:334] "Generic (PLEG): container finished" podID="65aa0404-25e7-4a24-8edf-ceae5320b02e" containerID="fb181f9fed48bdbe578ea66d3e2b50f9afb8ca00199e88f6e98d0b88fe0d4ed0" exitCode=0 Jan 31 04:22:01 crc kubenswrapper[4667]: I0131 04:22:01.221395 4667 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb" event={"ID":"65aa0404-25e7-4a24-8edf-ceae5320b02e","Type":"ContainerDied","Data":"fb181f9fed48bdbe578ea66d3e2b50f9afb8ca00199e88f6e98d0b88fe0d4ed0"} Jan 31 04:22:02 crc kubenswrapper[4667]: I0131 04:22:02.722363 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb" Jan 31 04:22:02 crc kubenswrapper[4667]: I0131 04:22:02.758554 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65aa0404-25e7-4a24-8edf-ceae5320b02e-ssh-key-openstack-edpm-ipam\") pod \"65aa0404-25e7-4a24-8edf-ceae5320b02e\" (UID: \"65aa0404-25e7-4a24-8edf-ceae5320b02e\") " Jan 31 04:22:02 crc kubenswrapper[4667]: I0131 04:22:02.758630 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65aa0404-25e7-4a24-8edf-ceae5320b02e-inventory\") pod \"65aa0404-25e7-4a24-8edf-ceae5320b02e\" (UID: \"65aa0404-25e7-4a24-8edf-ceae5320b02e\") " Jan 31 04:22:02 crc kubenswrapper[4667]: I0131 04:22:02.758660 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nktp\" (UniqueName: \"kubernetes.io/projected/65aa0404-25e7-4a24-8edf-ceae5320b02e-kube-api-access-2nktp\") pod \"65aa0404-25e7-4a24-8edf-ceae5320b02e\" (UID: \"65aa0404-25e7-4a24-8edf-ceae5320b02e\") " Jan 31 04:22:02 crc kubenswrapper[4667]: I0131 04:22:02.768141 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65aa0404-25e7-4a24-8edf-ceae5320b02e-kube-api-access-2nktp" (OuterVolumeSpecName: "kube-api-access-2nktp") pod "65aa0404-25e7-4a24-8edf-ceae5320b02e" (UID: "65aa0404-25e7-4a24-8edf-ceae5320b02e"). InnerVolumeSpecName "kube-api-access-2nktp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:22:02 crc kubenswrapper[4667]: I0131 04:22:02.796931 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65aa0404-25e7-4a24-8edf-ceae5320b02e-inventory" (OuterVolumeSpecName: "inventory") pod "65aa0404-25e7-4a24-8edf-ceae5320b02e" (UID: "65aa0404-25e7-4a24-8edf-ceae5320b02e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:22:02 crc kubenswrapper[4667]: I0131 04:22:02.800751 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65aa0404-25e7-4a24-8edf-ceae5320b02e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "65aa0404-25e7-4a24-8edf-ceae5320b02e" (UID: "65aa0404-25e7-4a24-8edf-ceae5320b02e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:22:02 crc kubenswrapper[4667]: I0131 04:22:02.861818 4667 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65aa0404-25e7-4a24-8edf-ceae5320b02e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:02 crc kubenswrapper[4667]: I0131 04:22:02.861889 4667 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65aa0404-25e7-4a24-8edf-ceae5320b02e-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:02 crc kubenswrapper[4667]: I0131 04:22:02.861904 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nktp\" (UniqueName: \"kubernetes.io/projected/65aa0404-25e7-4a24-8edf-ceae5320b02e-kube-api-access-2nktp\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.244715 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb" event={"ID":"65aa0404-25e7-4a24-8edf-ceae5320b02e","Type":"ContainerDied","Data":"bc860b5df7a2253594c97ea57015dc41c75e504a26577f423b62788e5ba731c5"} Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.244785 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc860b5df7a2253594c97ea57015dc41c75e504a26577f423b62788e5ba731c5" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.244798 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.396639 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk"] Jan 31 04:22:03 crc kubenswrapper[4667]: E0131 04:22:03.397152 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65aa0404-25e7-4a24-8edf-ceae5320b02e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.397173 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="65aa0404-25e7-4a24-8edf-ceae5320b02e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.397388 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="65aa0404-25e7-4a24-8edf-ceae5320b02e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.398112 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.401295 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7p2q" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.401566 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.401593 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.401737 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.401836 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.406905 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.407167 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.408106 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.410136 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk"] Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.481965 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.482544 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.482678 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.482724 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-ssh-key-openstack-edpm-ipam\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.482788 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.482868 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.482973 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9qwq\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-kube-api-access-q9qwq\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.483064 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.483264 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.483363 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.483451 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: 
\"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.483622 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.483663 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.483696 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.585492 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.587002 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.587258 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.587298 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.587368 4667 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.587409 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.587996 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9qwq\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-kube-api-access-q9qwq\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.588052 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.588141 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.588190 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.588273 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.588627 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.588660 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.588723 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.591314 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.592106 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.592474 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.592570 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.593422 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.597513 4667 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.597705 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.598561 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.599145 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.603493 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.609608 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.616071 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.617032 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.622243 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9qwq\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-kube-api-access-q9qwq\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-pcntk\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:03 crc kubenswrapper[4667]: I0131 04:22:03.728227 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:04 crc kubenswrapper[4667]: I0131 04:22:04.361876 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk"] Jan 31 04:22:05 crc kubenswrapper[4667]: I0131 04:22:05.263816 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" event={"ID":"f33e0c1e-9f27-49a0-8132-0516b49d5ceb","Type":"ContainerStarted","Data":"d5faefd0e77543ecb0fe54fdabec4893cc0ea49947b0796921bcb3f93ca2418e"} Jan 31 04:22:05 crc kubenswrapper[4667]: I0131 04:22:05.264335 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" event={"ID":"f33e0c1e-9f27-49a0-8132-0516b49d5ceb","Type":"ContainerStarted","Data":"1363d97486edde5cb9f50005166eeb6efe0c7a693145ac4c75274d721273cb55"} Jan 31 04:22:05 crc kubenswrapper[4667]: I0131 04:22:05.298213 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" podStartSLOduration=1.867823468 podStartE2EDuration="2.298193855s" podCreationTimestamp="2026-01-31 04:22:03 +0000 UTC" firstStartedPulling="2026-01-31 04:22:04.382856126 +0000 UTC m=+2047.899191425" lastFinishedPulling="2026-01-31 04:22:04.813226513 +0000 UTC m=+2048.329561812" observedRunningTime="2026-01-31 04:22:05.286126425 +0000 UTC m=+2048.802461724" watchObservedRunningTime="2026-01-31 04:22:05.298193855 +0000 UTC m=+2048.814529154" Jan 31 04:22:39 crc kubenswrapper[4667]: I0131 04:22:39.743613 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9544l"] Jan 31 04:22:39 crc kubenswrapper[4667]: I0131 04:22:39.746373 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9544l" Jan 31 04:22:39 crc kubenswrapper[4667]: I0131 04:22:39.765531 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9544l"] Jan 31 04:22:39 crc kubenswrapper[4667]: I0131 04:22:39.829205 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65qlb\" (UniqueName: \"kubernetes.io/projected/5fb797b5-1bf4-47ee-9383-d71fc353cbe6-kube-api-access-65qlb\") pod \"community-operators-9544l\" (UID: \"5fb797b5-1bf4-47ee-9383-d71fc353cbe6\") " pod="openshift-marketplace/community-operators-9544l" Jan 31 04:22:39 crc kubenswrapper[4667]: I0131 04:22:39.829354 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fb797b5-1bf4-47ee-9383-d71fc353cbe6-utilities\") pod \"community-operators-9544l\" (UID: \"5fb797b5-1bf4-47ee-9383-d71fc353cbe6\") " pod="openshift-marketplace/community-operators-9544l" Jan 31 04:22:39 crc kubenswrapper[4667]: I0131 04:22:39.829529 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fb797b5-1bf4-47ee-9383-d71fc353cbe6-catalog-content\") pod \"community-operators-9544l\" (UID: \"5fb797b5-1bf4-47ee-9383-d71fc353cbe6\") " pod="openshift-marketplace/community-operators-9544l" Jan 31 04:22:39 crc kubenswrapper[4667]: I0131 04:22:39.932085 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65qlb\" (UniqueName: \"kubernetes.io/projected/5fb797b5-1bf4-47ee-9383-d71fc353cbe6-kube-api-access-65qlb\") pod \"community-operators-9544l\" (UID: \"5fb797b5-1bf4-47ee-9383-d71fc353cbe6\") " pod="openshift-marketplace/community-operators-9544l" Jan 31 04:22:39 crc kubenswrapper[4667]: I0131 04:22:39.932335 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fb797b5-1bf4-47ee-9383-d71fc353cbe6-utilities\") pod \"community-operators-9544l\" (UID: \"5fb797b5-1bf4-47ee-9383-d71fc353cbe6\") " pod="openshift-marketplace/community-operators-9544l" Jan 31 04:22:39 crc kubenswrapper[4667]: I0131 04:22:39.932520 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fb797b5-1bf4-47ee-9383-d71fc353cbe6-catalog-content\") pod \"community-operators-9544l\" (UID: \"5fb797b5-1bf4-47ee-9383-d71fc353cbe6\") " pod="openshift-marketplace/community-operators-9544l" Jan 31 04:22:39 crc kubenswrapper[4667]: I0131 04:22:39.932858 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fb797b5-1bf4-47ee-9383-d71fc353cbe6-utilities\") pod \"community-operators-9544l\" (UID: \"5fb797b5-1bf4-47ee-9383-d71fc353cbe6\") " pod="openshift-marketplace/community-operators-9544l" Jan 31 04:22:39 crc kubenswrapper[4667]: I0131 04:22:39.933137 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fb797b5-1bf4-47ee-9383-d71fc353cbe6-catalog-content\") pod \"community-operators-9544l\" (UID: \"5fb797b5-1bf4-47ee-9383-d71fc353cbe6\") " pod="openshift-marketplace/community-operators-9544l" Jan 31 04:22:39 crc kubenswrapper[4667]: I0131 04:22:39.978712 4667 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-65qlb\" (UniqueName: \"kubernetes.io/projected/5fb797b5-1bf4-47ee-9383-d71fc353cbe6-kube-api-access-65qlb\") pod \"community-operators-9544l\" (UID: \"5fb797b5-1bf4-47ee-9383-d71fc353cbe6\") " pod="openshift-marketplace/community-operators-9544l" Jan 31 04:22:40 crc kubenswrapper[4667]: I0131 04:22:40.071059 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9544l" Jan 31 04:22:40 crc kubenswrapper[4667]: I0131 04:22:40.531480 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9544l"] Jan 31 04:22:40 crc kubenswrapper[4667]: I0131 04:22:40.656009 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9544l" event={"ID":"5fb797b5-1bf4-47ee-9383-d71fc353cbe6","Type":"ContainerStarted","Data":"bad50187f442c89bf3c84e65fe093f45e49383e213896c961d6529b7c2bff4ae"} Jan 31 04:22:41 crc kubenswrapper[4667]: I0131 04:22:41.665285 4667 generic.go:334] "Generic (PLEG): container finished" podID="5fb797b5-1bf4-47ee-9383-d71fc353cbe6" containerID="83b6c019c23f8169416fba886494a66286e5ab62197a9c0c9a9ddbf6f16414df" exitCode=0 Jan 31 04:22:41 crc kubenswrapper[4667]: I0131 04:22:41.665390 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9544l" event={"ID":"5fb797b5-1bf4-47ee-9383-d71fc353cbe6","Type":"ContainerDied","Data":"83b6c019c23f8169416fba886494a66286e5ab62197a9c0c9a9ddbf6f16414df"} Jan 31 04:22:42 crc kubenswrapper[4667]: I0131 04:22:42.676365 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9544l" event={"ID":"5fb797b5-1bf4-47ee-9383-d71fc353cbe6","Type":"ContainerStarted","Data":"ac0233e49177bd835669f1c724c02192762f22b81440dced8f895365a43528b5"} Jan 31 04:22:44 crc kubenswrapper[4667]: I0131 04:22:44.701408 4667 generic.go:334] "Generic (PLEG): container finished" podID="5fb797b5-1bf4-47ee-9383-d71fc353cbe6" containerID="ac0233e49177bd835669f1c724c02192762f22b81440dced8f895365a43528b5" exitCode=0 Jan 31 04:22:44 crc kubenswrapper[4667]: I0131 04:22:44.701744 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9544l" event={"ID":"5fb797b5-1bf4-47ee-9383-d71fc353cbe6","Type":"ContainerDied","Data":"ac0233e49177bd835669f1c724c02192762f22b81440dced8f895365a43528b5"} Jan 31 04:22:45 crc kubenswrapper[4667]: I0131 04:22:45.714001 4667 generic.go:334] "Generic (PLEG): container finished" podID="f33e0c1e-9f27-49a0-8132-0516b49d5ceb" containerID="d5faefd0e77543ecb0fe54fdabec4893cc0ea49947b0796921bcb3f93ca2418e" exitCode=0 Jan 31 04:22:45 crc kubenswrapper[4667]: I0131 04:22:45.714118 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" event={"ID":"f33e0c1e-9f27-49a0-8132-0516b49d5ceb","Type":"ContainerDied","Data":"d5faefd0e77543ecb0fe54fdabec4893cc0ea49947b0796921bcb3f93ca2418e"} Jan 31 04:22:45 crc kubenswrapper[4667]: I0131 04:22:45.717718 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9544l" event={"ID":"5fb797b5-1bf4-47ee-9383-d71fc353cbe6","Type":"ContainerStarted","Data":"9e646935a02c0343613ea9367e675f4e001259de60e340721dab504d3c4e51d0"} Jan 31 04:22:45 crc kubenswrapper[4667]: I0131 04:22:45.773295 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-9544l" podStartSLOduration=3.074241385 podStartE2EDuration="6.773247686s" podCreationTimestamp="2026-01-31 04:22:39 +0000 UTC" firstStartedPulling="2026-01-31 04:22:41.669671481 +0000 UTC m=+2085.186006780" lastFinishedPulling="2026-01-31 04:22:45.368677772 +0000 UTC m=+2088.885013081" observedRunningTime="2026-01-31 04:22:45.769059146 +0000 UTC m=+2089.285394445" watchObservedRunningTime="2026-01-31 04:22:45.773247686 +0000 UTC m=+2089.289582995" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.179129 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.306610 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.306689 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-bootstrap-combined-ca-bundle\") pod \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.306743 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-ovn-default-certs-0\") pod \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.306878 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-inventory\") pod \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.306995 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.307902 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-repo-setup-combined-ca-bundle\") pod \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.307978 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-neutron-metadata-combined-ca-bundle\") pod \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.308096 4667 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.308128 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-ovn-combined-ca-bundle\") pod \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.308170 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-libvirt-combined-ca-bundle\") pod \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.308209 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-ssh-key-openstack-edpm-ipam\") pod \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.308291 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-nova-combined-ca-bundle\") pod \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.308357 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-telemetry-combined-ca-bundle\") pod \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.308478 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9qwq\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-kube-api-access-q9qwq\") pod \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\" (UID: \"f33e0c1e-9f27-49a0-8132-0516b49d5ceb\") " Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.319149 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "f33e0c1e-9f27-49a0-8132-0516b49d5ceb" (UID: "f33e0c1e-9f27-49a0-8132-0516b49d5ceb"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.319552 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "f33e0c1e-9f27-49a0-8132-0516b49d5ceb" (UID: "f33e0c1e-9f27-49a0-8132-0516b49d5ceb"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.320308 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "f33e0c1e-9f27-49a0-8132-0516b49d5ceb" (UID: "f33e0c1e-9f27-49a0-8132-0516b49d5ceb"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.320428 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f33e0c1e-9f27-49a0-8132-0516b49d5ceb" (UID: "f33e0c1e-9f27-49a0-8132-0516b49d5ceb"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.321581 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f33e0c1e-9f27-49a0-8132-0516b49d5ceb" (UID: "f33e0c1e-9f27-49a0-8132-0516b49d5ceb"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.321604 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "f33e0c1e-9f27-49a0-8132-0516b49d5ceb" (UID: "f33e0c1e-9f27-49a0-8132-0516b49d5ceb"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.322231 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f33e0c1e-9f27-49a0-8132-0516b49d5ceb" (UID: "f33e0c1e-9f27-49a0-8132-0516b49d5ceb"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.323002 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "f33e0c1e-9f27-49a0-8132-0516b49d5ceb" (UID: "f33e0c1e-9f27-49a0-8132-0516b49d5ceb"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.323185 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "f33e0c1e-9f27-49a0-8132-0516b49d5ceb" (UID: "f33e0c1e-9f27-49a0-8132-0516b49d5ceb"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.325413 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "f33e0c1e-9f27-49a0-8132-0516b49d5ceb" (UID: "f33e0c1e-9f27-49a0-8132-0516b49d5ceb"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.328067 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-kube-api-access-q9qwq" (OuterVolumeSpecName: "kube-api-access-q9qwq") pod "f33e0c1e-9f27-49a0-8132-0516b49d5ceb" (UID: "f33e0c1e-9f27-49a0-8132-0516b49d5ceb"). InnerVolumeSpecName "kube-api-access-q9qwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.336165 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "f33e0c1e-9f27-49a0-8132-0516b49d5ceb" (UID: "f33e0c1e-9f27-49a0-8132-0516b49d5ceb"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.349376 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f33e0c1e-9f27-49a0-8132-0516b49d5ceb" (UID: "f33e0c1e-9f27-49a0-8132-0516b49d5ceb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.363824 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-inventory" (OuterVolumeSpecName: "inventory") pod "f33e0c1e-9f27-49a0-8132-0516b49d5ceb" (UID: "f33e0c1e-9f27-49a0-8132-0516b49d5ceb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.412485 4667 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.412544 4667 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.412556 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9qwq\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-kube-api-access-q9qwq\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.412572 4667 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.412587 4667 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.412601 4667 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.412614 4667 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.412628 4667 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.412641 4667 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.412650 4667 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.412661 4667 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.412671 4667 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.412680 4667 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.412692 4667 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f33e0c1e-9f27-49a0-8132-0516b49d5ceb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.739250 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" event={"ID":"f33e0c1e-9f27-49a0-8132-0516b49d5ceb","Type":"ContainerDied","Data":"1363d97486edde5cb9f50005166eeb6efe0c7a693145ac4c75274d721273cb55"} Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.739322 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1363d97486edde5cb9f50005166eeb6efe0c7a693145ac4c75274d721273cb55" Jan 31 04:22:47 crc kubenswrapper[4667]: I0131 04:22:47.739319 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-pcntk" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.005582 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9"] Jan 31 04:22:48 crc kubenswrapper[4667]: E0131 04:22:48.006415 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33e0c1e-9f27-49a0-8132-0516b49d5ceb" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.006436 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33e0c1e-9f27-49a0-8132-0516b49d5ceb" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.006621 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="f33e0c1e-9f27-49a0-8132-0516b49d5ceb" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.007303 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.010412 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.011033 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.011383 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.011954 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.015400 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7p2q" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.024815 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9"] Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.128995 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68a411c8-a168-43be-997d-d8a1313da926-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rdvv9\" (UID: \"68a411c8-a168-43be-997d-d8a1313da926\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.129054 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68a411c8-a168-43be-997d-d8a1313da926-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rdvv9\" (UID: \"68a411c8-a168-43be-997d-d8a1313da926\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.129079 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a411c8-a168-43be-997d-d8a1313da926-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rdvv9\" (UID: \"68a411c8-a168-43be-997d-d8a1313da926\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.129112 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67m7h\" (UniqueName: \"kubernetes.io/projected/68a411c8-a168-43be-997d-d8a1313da926-kube-api-access-67m7h\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rdvv9\" (UID: \"68a411c8-a168-43be-997d-d8a1313da926\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.129279 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/68a411c8-a168-43be-997d-d8a1313da926-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rdvv9\" (UID: \"68a411c8-a168-43be-997d-d8a1313da926\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.231289 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/68a411c8-a168-43be-997d-d8a1313da926-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rdvv9\" (UID: \"68a411c8-a168-43be-997d-d8a1313da926\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.231466 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68a411c8-a168-43be-997d-d8a1313da926-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rdvv9\" (UID: \"68a411c8-a168-43be-997d-d8a1313da926\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.231487 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68a411c8-a168-43be-997d-d8a1313da926-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rdvv9\" (UID: \"68a411c8-a168-43be-997d-d8a1313da926\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.231506 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a411c8-a168-43be-997d-d8a1313da926-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rdvv9\" (UID: \"68a411c8-a168-43be-997d-d8a1313da926\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.231548 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67m7h\" (UniqueName: \"kubernetes.io/projected/68a411c8-a168-43be-997d-d8a1313da926-kube-api-access-67m7h\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rdvv9\" (UID: \"68a411c8-a168-43be-997d-d8a1313da926\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.232617 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/68a411c8-a168-43be-997d-d8a1313da926-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rdvv9\" (UID: \"68a411c8-a168-43be-997d-d8a1313da926\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.239631 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68a411c8-a168-43be-997d-d8a1313da926-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rdvv9\" (UID: \"68a411c8-a168-43be-997d-d8a1313da926\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.240396 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68a411c8-a168-43be-997d-d8a1313da926-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rdvv9\" (UID: \"68a411c8-a168-43be-997d-d8a1313da926\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.241759 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/68a411c8-a168-43be-997d-d8a1313da926-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rdvv9\" (UID: \"68a411c8-a168-43be-997d-d8a1313da926\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.269533 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67m7h\" (UniqueName: \"kubernetes.io/projected/68a411c8-a168-43be-997d-d8a1313da926-kube-api-access-67m7h\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rdvv9\" (UID: \"68a411c8-a168-43be-997d-d8a1313da926\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" Jan 31 04:22:48 crc kubenswrapper[4667]: I0131 04:22:48.326771 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" Jan 31 04:22:49 crc kubenswrapper[4667]: I0131 04:22:49.112482 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9"] Jan 31 04:22:49 crc kubenswrapper[4667]: I0131 04:22:49.770270 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" event={"ID":"68a411c8-a168-43be-997d-d8a1313da926","Type":"ContainerStarted","Data":"cb494dc0d32be1ee4b36d7def2876dced1eaa65f7f85748dfa98c0d6d77454c9"} Jan 31 04:22:50 crc kubenswrapper[4667]: I0131 04:22:50.071685 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9544l" Jan 31 04:22:50 crc kubenswrapper[4667]: I0131 04:22:50.072146 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9544l" Jan 31 04:22:50 crc kubenswrapper[4667]: I0131 04:22:50.788440 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" event={"ID":"68a411c8-a168-43be-997d-d8a1313da926","Type":"ContainerStarted","Data":"3ec2f9ada123b4bb8b98d5c1ab3991668ab53b52223f19f3de68340294544bee"} Jan 31 04:22:50 crc kubenswrapper[4667]: I0131 04:22:50.812391 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" podStartSLOduration=3.32612494 podStartE2EDuration="3.812374775s" podCreationTimestamp="2026-01-31 04:22:47 +0000 UTC" firstStartedPulling="2026-01-31 04:22:49.120423698 +0000 UTC m=+2092.636758997" lastFinishedPulling="2026-01-31 04:22:49.606673523 +0000 UTC m=+2093.123008832" observedRunningTime="2026-01-31 04:22:50.807784273 +0000 UTC m=+2094.324119572" watchObservedRunningTime="2026-01-31 04:22:50.812374775 +0000 UTC m=+2094.328710064" Jan 31 04:22:51 crc kubenswrapper[4667]: I0131 04:22:51.125658 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9544l" podUID="5fb797b5-1bf4-47ee-9383-d71fc353cbe6" containerName="registry-server" probeResult="failure" output=< Jan 31 04:22:51 crc kubenswrapper[4667]: timeout: failed to connect service ":50051" within 1s Jan 31 04:22:51 crc kubenswrapper[4667]: > Jan 31 04:23:00 crc kubenswrapper[4667]: I0131 04:23:00.163664 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9544l" Jan 31 04:23:00 crc kubenswrapper[4667]: I0131 04:23:00.232804 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9544l" Jan 
31 04:23:00 crc kubenswrapper[4667]: I0131 04:23:00.407833 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9544l"] Jan 31 04:23:01 crc kubenswrapper[4667]: I0131 04:23:01.895516 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9544l" podUID="5fb797b5-1bf4-47ee-9383-d71fc353cbe6" containerName="registry-server" containerID="cri-o://9e646935a02c0343613ea9367e675f4e001259de60e340721dab504d3c4e51d0" gracePeriod=2 Jan 31 04:23:02 crc kubenswrapper[4667]: I0131 04:23:02.907436 4667 generic.go:334] "Generic (PLEG): container finished" podID="5fb797b5-1bf4-47ee-9383-d71fc353cbe6" containerID="9e646935a02c0343613ea9367e675f4e001259de60e340721dab504d3c4e51d0" exitCode=0 Jan 31 04:23:02 crc kubenswrapper[4667]: I0131 04:23:02.907513 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9544l" event={"ID":"5fb797b5-1bf4-47ee-9383-d71fc353cbe6","Type":"ContainerDied","Data":"9e646935a02c0343613ea9367e675f4e001259de60e340721dab504d3c4e51d0"} Jan 31 04:23:02 crc kubenswrapper[4667]: I0131 04:23:02.908001 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9544l" event={"ID":"5fb797b5-1bf4-47ee-9383-d71fc353cbe6","Type":"ContainerDied","Data":"bad50187f442c89bf3c84e65fe093f45e49383e213896c961d6529b7c2bff4ae"} Jan 31 04:23:02 crc kubenswrapper[4667]: I0131 04:23:02.908021 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bad50187f442c89bf3c84e65fe093f45e49383e213896c961d6529b7c2bff4ae" Jan 31 04:23:02 crc kubenswrapper[4667]: I0131 04:23:02.995493 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9544l" Jan 31 04:23:03 crc kubenswrapper[4667]: I0131 04:23:03.106971 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fb797b5-1bf4-47ee-9383-d71fc353cbe6-catalog-content\") pod \"5fb797b5-1bf4-47ee-9383-d71fc353cbe6\" (UID: \"5fb797b5-1bf4-47ee-9383-d71fc353cbe6\") " Jan 31 04:23:03 crc kubenswrapper[4667]: I0131 04:23:03.107083 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fb797b5-1bf4-47ee-9383-d71fc353cbe6-utilities\") pod \"5fb797b5-1bf4-47ee-9383-d71fc353cbe6\" (UID: \"5fb797b5-1bf4-47ee-9383-d71fc353cbe6\") " Jan 31 04:23:03 crc kubenswrapper[4667]: I0131 04:23:03.107204 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65qlb\" (UniqueName: \"kubernetes.io/projected/5fb797b5-1bf4-47ee-9383-d71fc353cbe6-kube-api-access-65qlb\") pod \"5fb797b5-1bf4-47ee-9383-d71fc353cbe6\" (UID: \"5fb797b5-1bf4-47ee-9383-d71fc353cbe6\") " Jan 31 04:23:03 crc kubenswrapper[4667]: I0131 04:23:03.108424 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fb797b5-1bf4-47ee-9383-d71fc353cbe6-utilities" (OuterVolumeSpecName: "utilities") pod "5fb797b5-1bf4-47ee-9383-d71fc353cbe6" (UID: "5fb797b5-1bf4-47ee-9383-d71fc353cbe6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:23:03 crc kubenswrapper[4667]: I0131 04:23:03.116118 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fb797b5-1bf4-47ee-9383-d71fc353cbe6-kube-api-access-65qlb" (OuterVolumeSpecName: "kube-api-access-65qlb") pod "5fb797b5-1bf4-47ee-9383-d71fc353cbe6" (UID: "5fb797b5-1bf4-47ee-9383-d71fc353cbe6"). InnerVolumeSpecName "kube-api-access-65qlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:23:03 crc kubenswrapper[4667]: I0131 04:23:03.161862 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fb797b5-1bf4-47ee-9383-d71fc353cbe6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fb797b5-1bf4-47ee-9383-d71fc353cbe6" (UID: "5fb797b5-1bf4-47ee-9383-d71fc353cbe6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:23:03 crc kubenswrapper[4667]: I0131 04:23:03.210229 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65qlb\" (UniqueName: \"kubernetes.io/projected/5fb797b5-1bf4-47ee-9383-d71fc353cbe6-kube-api-access-65qlb\") on node \"crc\" DevicePath \"\"" Jan 31 04:23:03 crc kubenswrapper[4667]: I0131 04:23:03.210260 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fb797b5-1bf4-47ee-9383-d71fc353cbe6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:23:03 crc kubenswrapper[4667]: I0131 04:23:03.210269 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fb797b5-1bf4-47ee-9383-d71fc353cbe6-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:23:03 crc kubenswrapper[4667]: I0131 04:23:03.919430 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9544l" Jan 31 04:23:03 crc kubenswrapper[4667]: I0131 04:23:03.961569 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9544l"] Jan 31 04:23:03 crc kubenswrapper[4667]: I0131 04:23:03.971762 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9544l"] Jan 31 04:23:05 crc kubenswrapper[4667]: I0131 04:23:05.298425 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fb797b5-1bf4-47ee-9383-d71fc353cbe6" path="/var/lib/kubelet/pods/5fb797b5-1bf4-47ee-9383-d71fc353cbe6/volumes" Jan 31 04:23:12 crc kubenswrapper[4667]: I0131 04:23:12.920147 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6ljwg"] Jan 31 04:23:12 crc kubenswrapper[4667]: E0131 04:23:12.921161 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb797b5-1bf4-47ee-9383-d71fc353cbe6" containerName="registry-server" Jan 31 04:23:12 crc kubenswrapper[4667]: I0131 04:23:12.921181 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb797b5-1bf4-47ee-9383-d71fc353cbe6" containerName="registry-server" Jan 31 04:23:12 crc kubenswrapper[4667]: E0131 04:23:12.921217 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb797b5-1bf4-47ee-9383-d71fc353cbe6" containerName="extract-utilities" Jan 31 04:23:12 crc kubenswrapper[4667]: I0131 04:23:12.921225 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb797b5-1bf4-47ee-9383-d71fc353cbe6" containerName="extract-utilities" Jan 31 04:23:12 crc kubenswrapper[4667]: E0131 04:23:12.921244 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb797b5-1bf4-47ee-9383-d71fc353cbe6" containerName="extract-content" Jan 31 04:23:12 crc kubenswrapper[4667]: I0131 04:23:12.921253 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb797b5-1bf4-47ee-9383-d71fc353cbe6" containerName="extract-content" Jan 31 04:23:12 crc kubenswrapper[4667]: I0131 04:23:12.921480 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fb797b5-1bf4-47ee-9383-d71fc353cbe6" containerName="registry-server" Jan 31 04:23:12 crc kubenswrapper[4667]: I0131 04:23:12.923063 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6ljwg" Jan 31 04:23:12 crc kubenswrapper[4667]: I0131 04:23:12.949436 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6ljwg"] Jan 31 04:23:13 crc kubenswrapper[4667]: I0131 04:23:13.040338 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/090c5044-980a-4eeb-8943-23f718765132-utilities\") pod \"redhat-operators-6ljwg\" (UID: \"090c5044-980a-4eeb-8943-23f718765132\") " pod="openshift-marketplace/redhat-operators-6ljwg" Jan 31 04:23:13 crc kubenswrapper[4667]: I0131 04:23:13.040439 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk8lh\" (UniqueName: \"kubernetes.io/projected/090c5044-980a-4eeb-8943-23f718765132-kube-api-access-zk8lh\") pod \"redhat-operators-6ljwg\" (UID: \"090c5044-980a-4eeb-8943-23f718765132\") " pod="openshift-marketplace/redhat-operators-6ljwg" Jan 31 04:23:13 crc kubenswrapper[4667]: I0131 04:23:13.040517 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/090c5044-980a-4eeb-8943-23f718765132-catalog-content\") pod \"redhat-operators-6ljwg\" (UID: \"090c5044-980a-4eeb-8943-23f718765132\") " pod="openshift-marketplace/redhat-operators-6ljwg" Jan 31 04:23:13 crc kubenswrapper[4667]: I0131 04:23:13.142159 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/090c5044-980a-4eeb-8943-23f718765132-catalog-content\") pod \"redhat-operators-6ljwg\" (UID: \"090c5044-980a-4eeb-8943-23f718765132\") " pod="openshift-marketplace/redhat-operators-6ljwg" Jan 31 04:23:13 crc kubenswrapper[4667]: I0131 04:23:13.142256 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/090c5044-980a-4eeb-8943-23f718765132-utilities\") pod \"redhat-operators-6ljwg\" (UID: \"090c5044-980a-4eeb-8943-23f718765132\") " pod="openshift-marketplace/redhat-operators-6ljwg" Jan 31 04:23:13 crc kubenswrapper[4667]: I0131 04:23:13.142329 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk8lh\" (UniqueName: \"kubernetes.io/projected/090c5044-980a-4eeb-8943-23f718765132-kube-api-access-zk8lh\") pod \"redhat-operators-6ljwg\" (UID: \"090c5044-980a-4eeb-8943-23f718765132\") " pod="openshift-marketplace/redhat-operators-6ljwg" Jan 31 04:23:13 crc kubenswrapper[4667]: I0131 04:23:13.143108 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/090c5044-980a-4eeb-8943-23f718765132-catalog-content\") pod \"redhat-operators-6ljwg\" (UID: \"090c5044-980a-4eeb-8943-23f718765132\") " pod="openshift-marketplace/redhat-operators-6ljwg" Jan 31 04:23:13 crc kubenswrapper[4667]: I0131 04:23:13.143159 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/090c5044-980a-4eeb-8943-23f718765132-utilities\") pod \"redhat-operators-6ljwg\" (UID: \"090c5044-980a-4eeb-8943-23f718765132\") " pod="openshift-marketplace/redhat-operators-6ljwg" Jan 31 04:23:13 crc kubenswrapper[4667]: I0131 04:23:13.175355 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zk8lh\" (UniqueName: \"kubernetes.io/projected/090c5044-980a-4eeb-8943-23f718765132-kube-api-access-zk8lh\") pod \"redhat-operators-6ljwg\" (UID: \"090c5044-980a-4eeb-8943-23f718765132\") " pod="openshift-marketplace/redhat-operators-6ljwg" Jan 31 04:23:13 crc kubenswrapper[4667]: I0131 04:23:13.248362 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6ljwg" Jan 31 04:23:13 crc kubenswrapper[4667]: I0131 04:23:13.812275 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6ljwg"] Jan 31 04:23:14 crc kubenswrapper[4667]: I0131 04:23:14.021648 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ljwg" event={"ID":"090c5044-980a-4eeb-8943-23f718765132","Type":"ContainerStarted","Data":"606a299aff93af323862a961a976c96f1035791d6d16b368446bececb3acf2fe"} Jan 31 04:23:15 crc kubenswrapper[4667]: I0131 04:23:15.042296 4667 generic.go:334] "Generic (PLEG): container finished" podID="090c5044-980a-4eeb-8943-23f718765132" containerID="67224bf43a912d50940de65d925b7cbb2d3ca75e9151cf08507634f22c7417f6" exitCode=0 Jan 31 04:23:15 crc kubenswrapper[4667]: I0131 04:23:15.042646 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ljwg" event={"ID":"090c5044-980a-4eeb-8943-23f718765132","Type":"ContainerDied","Data":"67224bf43a912d50940de65d925b7cbb2d3ca75e9151cf08507634f22c7417f6"} Jan 31 04:23:16 crc kubenswrapper[4667]: I0131 04:23:16.060655 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ljwg" event={"ID":"090c5044-980a-4eeb-8943-23f718765132","Type":"ContainerStarted","Data":"aaed4bcd31d07d1a623d157cebeaf6d18d0a5dddae923712d47af4008d9413f0"} Jan 31 04:23:22 crc kubenswrapper[4667]: I0131 04:23:22.113979 4667 generic.go:334] "Generic (PLEG): container finished" podID="090c5044-980a-4eeb-8943-23f718765132" containerID="aaed4bcd31d07d1a623d157cebeaf6d18d0a5dddae923712d47af4008d9413f0" exitCode=0 Jan 31 04:23:22 crc kubenswrapper[4667]: I0131 04:23:22.114089 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ljwg" event={"ID":"090c5044-980a-4eeb-8943-23f718765132","Type":"ContainerDied","Data":"aaed4bcd31d07d1a623d157cebeaf6d18d0a5dddae923712d47af4008d9413f0"} Jan 31 04:23:23 crc kubenswrapper[4667]: I0131 04:23:23.123464 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ljwg" event={"ID":"090c5044-980a-4eeb-8943-23f718765132","Type":"ContainerStarted","Data":"623c5c333a0f53be6ea5be509edc63ed4460f0b1eb053b6cc7e5179de7eb0eb5"} Jan 31 04:23:23 crc kubenswrapper[4667]: I0131 04:23:23.248957 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6ljwg" Jan 31 04:23:23 crc kubenswrapper[4667]: I0131 04:23:23.249048 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6ljwg" Jan 31 04:23:24 crc kubenswrapper[4667]: I0131 04:23:24.404625 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6ljwg" podUID="090c5044-980a-4eeb-8943-23f718765132" containerName="registry-server" probeResult="failure" output=< Jan 31 04:23:24 crc kubenswrapper[4667]: timeout: failed to connect service ":50051" within 1s Jan 31 04:23:24 crc kubenswrapper[4667]: > Jan 31 04:23:34 crc 
kubenswrapper[4667]: I0131 04:23:34.317668 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6ljwg" podUID="090c5044-980a-4eeb-8943-23f718765132" containerName="registry-server" probeResult="failure" output=< Jan 31 04:23:34 crc kubenswrapper[4667]: timeout: failed to connect service ":50051" within 1s Jan 31 04:23:34 crc kubenswrapper[4667]: > Jan 31 04:23:43 crc kubenswrapper[4667]: I0131 04:23:43.326742 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6ljwg" Jan 31 04:23:43 crc kubenswrapper[4667]: I0131 04:23:43.370528 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6ljwg" podStartSLOduration=23.875872786 podStartE2EDuration="31.370495973s" podCreationTimestamp="2026-01-31 04:23:12 +0000 UTC" firstStartedPulling="2026-01-31 04:23:15.045295743 +0000 UTC m=+2118.561631042" lastFinishedPulling="2026-01-31 04:23:22.53991894 +0000 UTC m=+2126.056254229" observedRunningTime="2026-01-31 04:23:23.161278929 +0000 UTC m=+2126.677614228" watchObservedRunningTime="2026-01-31 04:23:43.370495973 +0000 UTC m=+2146.886831312" Jan 31 04:23:43 crc kubenswrapper[4667]: I0131 04:23:43.396328 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6ljwg" Jan 31 04:23:44 crc kubenswrapper[4667]: I0131 04:23:44.124584 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6ljwg"] Jan 31 04:23:45 crc kubenswrapper[4667]: I0131 04:23:45.334247 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6ljwg" podUID="090c5044-980a-4eeb-8943-23f718765132" containerName="registry-server" containerID="cri-o://623c5c333a0f53be6ea5be509edc63ed4460f0b1eb053b6cc7e5179de7eb0eb5" gracePeriod=2 Jan 31 04:23:45 crc kubenswrapper[4667]: I0131 04:23:45.704486 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:23:45 crc kubenswrapper[4667]: I0131 04:23:45.704540 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:23:45 crc kubenswrapper[4667]: I0131 04:23:45.990874 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6ljwg" Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.152237 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk8lh\" (UniqueName: \"kubernetes.io/projected/090c5044-980a-4eeb-8943-23f718765132-kube-api-access-zk8lh\") pod \"090c5044-980a-4eeb-8943-23f718765132\" (UID: \"090c5044-980a-4eeb-8943-23f718765132\") " Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.152560 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/090c5044-980a-4eeb-8943-23f718765132-utilities\") pod \"090c5044-980a-4eeb-8943-23f718765132\" (UID: \"090c5044-980a-4eeb-8943-23f718765132\") " Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.152593 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/090c5044-980a-4eeb-8943-23f718765132-catalog-content\") pod \"090c5044-980a-4eeb-8943-23f718765132\" (UID: \"090c5044-980a-4eeb-8943-23f718765132\") " Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.153258 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/090c5044-980a-4eeb-8943-23f718765132-utilities" (OuterVolumeSpecName: "utilities") pod "090c5044-980a-4eeb-8943-23f718765132" (UID: "090c5044-980a-4eeb-8943-23f718765132"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.153607 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/090c5044-980a-4eeb-8943-23f718765132-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.163325 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/090c5044-980a-4eeb-8943-23f718765132-kube-api-access-zk8lh" (OuterVolumeSpecName: "kube-api-access-zk8lh") pod "090c5044-980a-4eeb-8943-23f718765132" (UID: "090c5044-980a-4eeb-8943-23f718765132"). InnerVolumeSpecName "kube-api-access-zk8lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.256352 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk8lh\" (UniqueName: \"kubernetes.io/projected/090c5044-980a-4eeb-8943-23f718765132-kube-api-access-zk8lh\") on node \"crc\" DevicePath \"\"" Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.283471 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/090c5044-980a-4eeb-8943-23f718765132-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "090c5044-980a-4eeb-8943-23f718765132" (UID: "090c5044-980a-4eeb-8943-23f718765132"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.350716 4667 generic.go:334] "Generic (PLEG): container finished" podID="090c5044-980a-4eeb-8943-23f718765132" containerID="623c5c333a0f53be6ea5be509edc63ed4460f0b1eb053b6cc7e5179de7eb0eb5" exitCode=0 Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.350875 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6ljwg" Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.350814 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ljwg" event={"ID":"090c5044-980a-4eeb-8943-23f718765132","Type":"ContainerDied","Data":"623c5c333a0f53be6ea5be509edc63ed4460f0b1eb053b6cc7e5179de7eb0eb5"} Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.351355 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ljwg" event={"ID":"090c5044-980a-4eeb-8943-23f718765132","Type":"ContainerDied","Data":"606a299aff93af323862a961a976c96f1035791d6d16b368446bececb3acf2fe"} Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.351376 4667 scope.go:117] "RemoveContainer" containerID="623c5c333a0f53be6ea5be509edc63ed4460f0b1eb053b6cc7e5179de7eb0eb5" Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.358069 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/090c5044-980a-4eeb-8943-23f718765132-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.382770 4667 scope.go:117] "RemoveContainer" containerID="aaed4bcd31d07d1a623d157cebeaf6d18d0a5dddae923712d47af4008d9413f0" Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.414866 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6ljwg"] Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.424155 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6ljwg"] Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.443601 4667 scope.go:117] "RemoveContainer" containerID="67224bf43a912d50940de65d925b7cbb2d3ca75e9151cf08507634f22c7417f6" Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.462989 4667 scope.go:117] "RemoveContainer" containerID="623c5c333a0f53be6ea5be509edc63ed4460f0b1eb053b6cc7e5179de7eb0eb5" Jan 31 04:23:46 crc kubenswrapper[4667]: E0131 04:23:46.465499 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"623c5c333a0f53be6ea5be509edc63ed4460f0b1eb053b6cc7e5179de7eb0eb5\": container with ID starting with 623c5c333a0f53be6ea5be509edc63ed4460f0b1eb053b6cc7e5179de7eb0eb5 not found: ID does not exist" containerID="623c5c333a0f53be6ea5be509edc63ed4460f0b1eb053b6cc7e5179de7eb0eb5" Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.465550 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"623c5c333a0f53be6ea5be509edc63ed4460f0b1eb053b6cc7e5179de7eb0eb5"} err="failed to get container status \"623c5c333a0f53be6ea5be509edc63ed4460f0b1eb053b6cc7e5179de7eb0eb5\": rpc error: code = NotFound desc = could not find container \"623c5c333a0f53be6ea5be509edc63ed4460f0b1eb053b6cc7e5179de7eb0eb5\": container with ID starting with 623c5c333a0f53be6ea5be509edc63ed4460f0b1eb053b6cc7e5179de7eb0eb5 not found: ID does not exist" Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.465583 4667 scope.go:117] "RemoveContainer" containerID="aaed4bcd31d07d1a623d157cebeaf6d18d0a5dddae923712d47af4008d9413f0" Jan 31 04:23:46 crc kubenswrapper[4667]: E0131 04:23:46.466072 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaed4bcd31d07d1a623d157cebeaf6d18d0a5dddae923712d47af4008d9413f0\": container with ID 
starting with aaed4bcd31d07d1a623d157cebeaf6d18d0a5dddae923712d47af4008d9413f0 not found: ID does not exist" containerID="aaed4bcd31d07d1a623d157cebeaf6d18d0a5dddae923712d47af4008d9413f0" Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.466095 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaed4bcd31d07d1a623d157cebeaf6d18d0a5dddae923712d47af4008d9413f0"} err="failed to get container status \"aaed4bcd31d07d1a623d157cebeaf6d18d0a5dddae923712d47af4008d9413f0\": rpc error: code = NotFound desc = could not find container \"aaed4bcd31d07d1a623d157cebeaf6d18d0a5dddae923712d47af4008d9413f0\": container with ID starting with aaed4bcd31d07d1a623d157cebeaf6d18d0a5dddae923712d47af4008d9413f0 not found: ID does not exist" Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.466111 4667 scope.go:117] "RemoveContainer" containerID="67224bf43a912d50940de65d925b7cbb2d3ca75e9151cf08507634f22c7417f6" Jan 31 04:23:46 crc kubenswrapper[4667]: E0131 04:23:46.466717 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67224bf43a912d50940de65d925b7cbb2d3ca75e9151cf08507634f22c7417f6\": container with ID starting with 67224bf43a912d50940de65d925b7cbb2d3ca75e9151cf08507634f22c7417f6 not found: ID does not exist" containerID="67224bf43a912d50940de65d925b7cbb2d3ca75e9151cf08507634f22c7417f6" Jan 31 04:23:46 crc kubenswrapper[4667]: I0131 04:23:46.466736 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67224bf43a912d50940de65d925b7cbb2d3ca75e9151cf08507634f22c7417f6"} err="failed to get container status \"67224bf43a912d50940de65d925b7cbb2d3ca75e9151cf08507634f22c7417f6\": rpc error: code = NotFound desc = could not find container \"67224bf43a912d50940de65d925b7cbb2d3ca75e9151cf08507634f22c7417f6\": container with ID starting with 67224bf43a912d50940de65d925b7cbb2d3ca75e9151cf08507634f22c7417f6 not found: ID does not exist" Jan 31 04:23:47 crc kubenswrapper[4667]: I0131 04:23:47.301662 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="090c5044-980a-4eeb-8943-23f718765132" path="/var/lib/kubelet/pods/090c5044-980a-4eeb-8943-23f718765132/volumes" Jan 31 04:24:01 crc kubenswrapper[4667]: I0131 04:24:01.530377 4667 generic.go:334] "Generic (PLEG): container finished" podID="68a411c8-a168-43be-997d-d8a1313da926" containerID="3ec2f9ada123b4bb8b98d5c1ab3991668ab53b52223f19f3de68340294544bee" exitCode=0 Jan 31 04:24:01 crc kubenswrapper[4667]: I0131 04:24:01.530972 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" event={"ID":"68a411c8-a168-43be-997d-d8a1313da926","Type":"ContainerDied","Data":"3ec2f9ada123b4bb8b98d5c1ab3991668ab53b52223f19f3de68340294544bee"} Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.052291 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.137308 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68a411c8-a168-43be-997d-d8a1313da926-inventory\") pod \"68a411c8-a168-43be-997d-d8a1313da926\" (UID: \"68a411c8-a168-43be-997d-d8a1313da926\") " Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.137559 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a411c8-a168-43be-997d-d8a1313da926-ovn-combined-ca-bundle\") pod \"68a411c8-a168-43be-997d-d8a1313da926\" (UID: \"68a411c8-a168-43be-997d-d8a1313da926\") " Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.137591 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/68a411c8-a168-43be-997d-d8a1313da926-ovncontroller-config-0\") pod \"68a411c8-a168-43be-997d-d8a1313da926\" (UID: \"68a411c8-a168-43be-997d-d8a1313da926\") " Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.137643 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68a411c8-a168-43be-997d-d8a1313da926-ssh-key-openstack-edpm-ipam\") pod \"68a411c8-a168-43be-997d-d8a1313da926\" (UID: \"68a411c8-a168-43be-997d-d8a1313da926\") " Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.137689 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67m7h\" (UniqueName: \"kubernetes.io/projected/68a411c8-a168-43be-997d-d8a1313da926-kube-api-access-67m7h\") pod \"68a411c8-a168-43be-997d-d8a1313da926\" (UID: \"68a411c8-a168-43be-997d-d8a1313da926\") " Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.145110 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68a411c8-a168-43be-997d-d8a1313da926-kube-api-access-67m7h" (OuterVolumeSpecName: "kube-api-access-67m7h") pod "68a411c8-a168-43be-997d-d8a1313da926" (UID: "68a411c8-a168-43be-997d-d8a1313da926"). InnerVolumeSpecName "kube-api-access-67m7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.149007 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68a411c8-a168-43be-997d-d8a1313da926-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "68a411c8-a168-43be-997d-d8a1313da926" (UID: "68a411c8-a168-43be-997d-d8a1313da926"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.171941 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68a411c8-a168-43be-997d-d8a1313da926-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "68a411c8-a168-43be-997d-d8a1313da926" (UID: "68a411c8-a168-43be-997d-d8a1313da926"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.172339 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68a411c8-a168-43be-997d-d8a1313da926-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "68a411c8-a168-43be-997d-d8a1313da926" (UID: "68a411c8-a168-43be-997d-d8a1313da926"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.173988 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68a411c8-a168-43be-997d-d8a1313da926-inventory" (OuterVolumeSpecName: "inventory") pod "68a411c8-a168-43be-997d-d8a1313da926" (UID: "68a411c8-a168-43be-997d-d8a1313da926"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.240226 4667 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68a411c8-a168-43be-997d-d8a1313da926-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.240517 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67m7h\" (UniqueName: \"kubernetes.io/projected/68a411c8-a168-43be-997d-d8a1313da926-kube-api-access-67m7h\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.240628 4667 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68a411c8-a168-43be-997d-d8a1313da926-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.240709 4667 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68a411c8-a168-43be-997d-d8a1313da926-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.240783 4667 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/68a411c8-a168-43be-997d-d8a1313da926-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.552776 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" event={"ID":"68a411c8-a168-43be-997d-d8a1313da926","Type":"ContainerDied","Data":"cb494dc0d32be1ee4b36d7def2876dced1eaa65f7f85748dfa98c0d6d77454c9"} Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.552823 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb494dc0d32be1ee4b36d7def2876dced1eaa65f7f85748dfa98c0d6d77454c9" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.552858 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rdvv9" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.726951 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb"] Jan 31 04:24:03 crc kubenswrapper[4667]: E0131 04:24:03.727371 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68a411c8-a168-43be-997d-d8a1313da926" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.727389 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="68a411c8-a168-43be-997d-d8a1313da926" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 04:24:03 crc kubenswrapper[4667]: E0131 04:24:03.727417 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090c5044-980a-4eeb-8943-23f718765132" containerName="extract-utilities" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.727424 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="090c5044-980a-4eeb-8943-23f718765132" containerName="extract-utilities" Jan 31 04:24:03 crc kubenswrapper[4667]: E0131 04:24:03.727432 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090c5044-980a-4eeb-8943-23f718765132" containerName="extract-content" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.727438 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="090c5044-980a-4eeb-8943-23f718765132" containerName="extract-content" Jan 31 04:24:03 crc kubenswrapper[4667]: E0131 04:24:03.727460 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090c5044-980a-4eeb-8943-23f718765132" containerName="registry-server" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.727467 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="090c5044-980a-4eeb-8943-23f718765132" containerName="registry-server" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.727628 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="090c5044-980a-4eeb-8943-23f718765132" containerName="registry-server" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.727649 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="68a411c8-a168-43be-997d-d8a1313da926" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.728292 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.731723 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.731828 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7p2q" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.731876 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.732518 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.734960 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.737389 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.749121 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb"] Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.854580 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.854636 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.854668 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.854693 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjfbf\" (UniqueName: \"kubernetes.io/projected/92bb44a8-6936-4c3f-96f6-b9572d90574d-kube-api-access-tjfbf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.855060 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.855183 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.957204 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.957274 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.957309 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.957331 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjfbf\" (UniqueName: \"kubernetes.io/projected/92bb44a8-6936-4c3f-96f6-b9572d90574d-kube-api-access-tjfbf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.957399 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.957424 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.961623 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.962203 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.963096 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.963338 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.964632 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:24:03 crc kubenswrapper[4667]: I0131 04:24:03.975320 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjfbf\" (UniqueName: \"kubernetes.io/projected/92bb44a8-6936-4c3f-96f6-b9572d90574d-kube-api-access-tjfbf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:24:04 crc kubenswrapper[4667]: I0131 04:24:04.083867 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:24:04 crc kubenswrapper[4667]: I0131 04:24:04.575127 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb"] Jan 31 04:24:05 crc kubenswrapper[4667]: I0131 04:24:05.572874 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" event={"ID":"92bb44a8-6936-4c3f-96f6-b9572d90574d","Type":"ContainerStarted","Data":"77f59a11d135a2f4bd8b1700a213c915460f5addce7a87c4ca389d8d681213a3"} Jan 31 04:24:05 crc kubenswrapper[4667]: I0131 04:24:05.573201 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" event={"ID":"92bb44a8-6936-4c3f-96f6-b9572d90574d","Type":"ContainerStarted","Data":"09b6b6be778997b1146b4299c6ece8aa19f92e75b3729062d448e16497599305"} Jan 31 04:24:05 crc kubenswrapper[4667]: I0131 04:24:05.598168 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" podStartSLOduration=2.049586294 podStartE2EDuration="2.598139589s" podCreationTimestamp="2026-01-31 04:24:03 +0000 UTC" firstStartedPulling="2026-01-31 04:24:04.599352718 +0000 UTC m=+2168.115688017" lastFinishedPulling="2026-01-31 04:24:05.147905973 +0000 UTC m=+2168.664241312" observedRunningTime="2026-01-31 04:24:05.593106486 +0000 UTC m=+2169.109441805" watchObservedRunningTime="2026-01-31 04:24:05.598139589 +0000 UTC m=+2169.114474898" Jan 31 04:24:15 crc kubenswrapper[4667]: I0131 04:24:15.704610 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:24:15 crc kubenswrapper[4667]: I0131 04:24:15.705611 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:24:41 crc kubenswrapper[4667]: I0131 04:24:41.348259 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tg2c4"] Jan 31 04:24:41 crc kubenswrapper[4667]: I0131 04:24:41.352044 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tg2c4" Jan 31 04:24:41 crc kubenswrapper[4667]: I0131 04:24:41.365123 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tg2c4"] Jan 31 04:24:41 crc kubenswrapper[4667]: I0131 04:24:41.502168 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e8c969-d735-4dce-9435-b64960e2805b-utilities\") pod \"certified-operators-tg2c4\" (UID: \"56e8c969-d735-4dce-9435-b64960e2805b\") " pod="openshift-marketplace/certified-operators-tg2c4" Jan 31 04:24:41 crc kubenswrapper[4667]: I0131 04:24:41.502230 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e8c969-d735-4dce-9435-b64960e2805b-catalog-content\") pod \"certified-operators-tg2c4\" (UID: \"56e8c969-d735-4dce-9435-b64960e2805b\") " pod="openshift-marketplace/certified-operators-tg2c4" Jan 31 04:24:41 crc kubenswrapper[4667]: I0131 04:24:41.502667 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nh72\" (UniqueName: \"kubernetes.io/projected/56e8c969-d735-4dce-9435-b64960e2805b-kube-api-access-4nh72\") pod \"certified-operators-tg2c4\" (UID: \"56e8c969-d735-4dce-9435-b64960e2805b\") " pod="openshift-marketplace/certified-operators-tg2c4" Jan 31 04:24:41 crc kubenswrapper[4667]: I0131 04:24:41.605525 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e8c969-d735-4dce-9435-b64960e2805b-utilities\") pod \"certified-operators-tg2c4\" (UID: \"56e8c969-d735-4dce-9435-b64960e2805b\") " pod="openshift-marketplace/certified-operators-tg2c4" Jan 31 04:24:41 crc kubenswrapper[4667]: I0131 04:24:41.605584 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e8c969-d735-4dce-9435-b64960e2805b-catalog-content\") pod \"certified-operators-tg2c4\" (UID: \"56e8c969-d735-4dce-9435-b64960e2805b\") " pod="openshift-marketplace/certified-operators-tg2c4" Jan 31 04:24:41 crc kubenswrapper[4667]: I0131 04:24:41.605797 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nh72\" (UniqueName: \"kubernetes.io/projected/56e8c969-d735-4dce-9435-b64960e2805b-kube-api-access-4nh72\") pod \"certified-operators-tg2c4\" (UID: \"56e8c969-d735-4dce-9435-b64960e2805b\") " pod="openshift-marketplace/certified-operators-tg2c4" Jan 31 04:24:41 crc kubenswrapper[4667]: I0131 04:24:41.606428 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e8c969-d735-4dce-9435-b64960e2805b-utilities\") pod \"certified-operators-tg2c4\" (UID: \"56e8c969-d735-4dce-9435-b64960e2805b\") " pod="openshift-marketplace/certified-operators-tg2c4" Jan 31 04:24:41 crc kubenswrapper[4667]: I0131 04:24:41.606969 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e8c969-d735-4dce-9435-b64960e2805b-catalog-content\") pod \"certified-operators-tg2c4\" (UID: \"56e8c969-d735-4dce-9435-b64960e2805b\") " pod="openshift-marketplace/certified-operators-tg2c4" Jan 31 04:24:41 crc kubenswrapper[4667]: I0131 04:24:41.632042 4667 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4nh72\" (UniqueName: \"kubernetes.io/projected/56e8c969-d735-4dce-9435-b64960e2805b-kube-api-access-4nh72\") pod \"certified-operators-tg2c4\" (UID: \"56e8c969-d735-4dce-9435-b64960e2805b\") " pod="openshift-marketplace/certified-operators-tg2c4" Jan 31 04:24:41 crc kubenswrapper[4667]: I0131 04:24:41.683054 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tg2c4" Jan 31 04:24:42 crc kubenswrapper[4667]: I0131 04:24:42.207929 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tg2c4"] Jan 31 04:24:42 crc kubenswrapper[4667]: I0131 04:24:42.967949 4667 generic.go:334] "Generic (PLEG): container finished" podID="56e8c969-d735-4dce-9435-b64960e2805b" containerID="2cdf41ed2c3eac400821d1776d364111e3682d3b158dc5a54fcac32f660fa715" exitCode=0 Jan 31 04:24:42 crc kubenswrapper[4667]: I0131 04:24:42.968180 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tg2c4" event={"ID":"56e8c969-d735-4dce-9435-b64960e2805b","Type":"ContainerDied","Data":"2cdf41ed2c3eac400821d1776d364111e3682d3b158dc5a54fcac32f660fa715"} Jan 31 04:24:42 crc kubenswrapper[4667]: I0131 04:24:42.968368 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tg2c4" event={"ID":"56e8c969-d735-4dce-9435-b64960e2805b","Type":"ContainerStarted","Data":"4df138dd987c64c939dd29c1ef87be925a86de35b6c40b8ece64c8830bfadf1d"} Jan 31 04:24:43 crc kubenswrapper[4667]: I0131 04:24:43.939859 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mq7tf"] Jan 31 04:24:43 crc kubenswrapper[4667]: I0131 04:24:43.943296 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mq7tf" Jan 31 04:24:43 crc kubenswrapper[4667]: I0131 04:24:43.963222 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mq7tf"] Jan 31 04:24:43 crc kubenswrapper[4667]: I0131 04:24:43.992217 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08d5db9a-ffda-439f-a9cf-ff6b06044900-utilities\") pod \"redhat-marketplace-mq7tf\" (UID: \"08d5db9a-ffda-439f-a9cf-ff6b06044900\") " pod="openshift-marketplace/redhat-marketplace-mq7tf" Jan 31 04:24:43 crc kubenswrapper[4667]: I0131 04:24:43.992327 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlgnv\" (UniqueName: \"kubernetes.io/projected/08d5db9a-ffda-439f-a9cf-ff6b06044900-kube-api-access-zlgnv\") pod \"redhat-marketplace-mq7tf\" (UID: \"08d5db9a-ffda-439f-a9cf-ff6b06044900\") " pod="openshift-marketplace/redhat-marketplace-mq7tf" Jan 31 04:24:43 crc kubenswrapper[4667]: I0131 04:24:43.992447 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08d5db9a-ffda-439f-a9cf-ff6b06044900-catalog-content\") pod \"redhat-marketplace-mq7tf\" (UID: \"08d5db9a-ffda-439f-a9cf-ff6b06044900\") " pod="openshift-marketplace/redhat-marketplace-mq7tf" Jan 31 04:24:44 crc kubenswrapper[4667]: I0131 04:24:44.004749 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tg2c4" event={"ID":"56e8c969-d735-4dce-9435-b64960e2805b","Type":"ContainerStarted","Data":"ccf83f0881df6e7a13ef5042d305e40a8397af1201167f32a06c6f69ba1f4883"} Jan 31 04:24:44 crc kubenswrapper[4667]: I0131 04:24:44.095820 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08d5db9a-ffda-439f-a9cf-ff6b06044900-utilities\") pod \"redhat-marketplace-mq7tf\" (UID: \"08d5db9a-ffda-439f-a9cf-ff6b06044900\") " pod="openshift-marketplace/redhat-marketplace-mq7tf" Jan 31 04:24:44 crc kubenswrapper[4667]: I0131 04:24:44.095940 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlgnv\" (UniqueName: \"kubernetes.io/projected/08d5db9a-ffda-439f-a9cf-ff6b06044900-kube-api-access-zlgnv\") pod \"redhat-marketplace-mq7tf\" (UID: \"08d5db9a-ffda-439f-a9cf-ff6b06044900\") " pod="openshift-marketplace/redhat-marketplace-mq7tf" Jan 31 04:24:44 crc kubenswrapper[4667]: I0131 04:24:44.096029 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08d5db9a-ffda-439f-a9cf-ff6b06044900-catalog-content\") pod \"redhat-marketplace-mq7tf\" (UID: \"08d5db9a-ffda-439f-a9cf-ff6b06044900\") " pod="openshift-marketplace/redhat-marketplace-mq7tf" Jan 31 04:24:44 crc kubenswrapper[4667]: I0131 04:24:44.096503 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08d5db9a-ffda-439f-a9cf-ff6b06044900-catalog-content\") pod \"redhat-marketplace-mq7tf\" (UID: \"08d5db9a-ffda-439f-a9cf-ff6b06044900\") " pod="openshift-marketplace/redhat-marketplace-mq7tf" Jan 31 04:24:44 crc kubenswrapper[4667]: I0131 04:24:44.096506 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/08d5db9a-ffda-439f-a9cf-ff6b06044900-utilities\") pod \"redhat-marketplace-mq7tf\" (UID: \"08d5db9a-ffda-439f-a9cf-ff6b06044900\") " pod="openshift-marketplace/redhat-marketplace-mq7tf" Jan 31 04:24:44 crc kubenswrapper[4667]: I0131 04:24:44.122974 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlgnv\" (UniqueName: \"kubernetes.io/projected/08d5db9a-ffda-439f-a9cf-ff6b06044900-kube-api-access-zlgnv\") pod \"redhat-marketplace-mq7tf\" (UID: \"08d5db9a-ffda-439f-a9cf-ff6b06044900\") " pod="openshift-marketplace/redhat-marketplace-mq7tf" Jan 31 04:24:44 crc kubenswrapper[4667]: I0131 04:24:44.308375 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mq7tf" Jan 31 04:24:44 crc kubenswrapper[4667]: I0131 04:24:44.829314 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mq7tf"] Jan 31 04:24:45 crc kubenswrapper[4667]: I0131 04:24:45.050687 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mq7tf" event={"ID":"08d5db9a-ffda-439f-a9cf-ff6b06044900","Type":"ContainerStarted","Data":"16fbb44dbad1f18620617e4d4576563f1a9bf4a1ea10a8dbf585c16dffe6e791"} Jan 31 04:24:45 crc kubenswrapper[4667]: I0131 04:24:45.704956 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:24:45 crc kubenswrapper[4667]: I0131 04:24:45.705738 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:24:45 crc kubenswrapper[4667]: I0131 04:24:45.706055 4667 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 04:24:45 crc kubenswrapper[4667]: I0131 04:24:45.707708 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef1eb767bd280ddd0c820466d361f32d61a4b5810e115e81b09dba157f785aad"} pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:24:45 crc kubenswrapper[4667]: I0131 04:24:45.707896 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" containerID="cri-o://ef1eb767bd280ddd0c820466d361f32d61a4b5810e115e81b09dba157f785aad" gracePeriod=600 Jan 31 04:24:46 crc kubenswrapper[4667]: I0131 04:24:46.064187 4667 generic.go:334] "Generic (PLEG): container finished" podID="08d5db9a-ffda-439f-a9cf-ff6b06044900" containerID="d15ee424e0ed04c9e99adc36335e24e4fb1344db8ec502eb3be208dbf673a7c4" exitCode=0 Jan 31 04:24:46 crc kubenswrapper[4667]: I0131 04:24:46.065044 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mq7tf" 
event={"ID":"08d5db9a-ffda-439f-a9cf-ff6b06044900","Type":"ContainerDied","Data":"d15ee424e0ed04c9e99adc36335e24e4fb1344db8ec502eb3be208dbf673a7c4"} Jan 31 04:24:46 crc kubenswrapper[4667]: I0131 04:24:46.071790 4667 generic.go:334] "Generic (PLEG): container finished" podID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerID="ef1eb767bd280ddd0c820466d361f32d61a4b5810e115e81b09dba157f785aad" exitCode=0 Jan 31 04:24:46 crc kubenswrapper[4667]: I0131 04:24:46.072127 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerDied","Data":"ef1eb767bd280ddd0c820466d361f32d61a4b5810e115e81b09dba157f785aad"} Jan 31 04:24:46 crc kubenswrapper[4667]: I0131 04:24:46.072174 4667 scope.go:117] "RemoveContainer" containerID="52796184d23595b846472c11c5dceaaa8d9b03476b3cbc4f47edf0ad21ac1e50" Jan 31 04:24:46 crc kubenswrapper[4667]: I0131 04:24:46.079595 4667 generic.go:334] "Generic (PLEG): container finished" podID="56e8c969-d735-4dce-9435-b64960e2805b" containerID="ccf83f0881df6e7a13ef5042d305e40a8397af1201167f32a06c6f69ba1f4883" exitCode=0 Jan 31 04:24:46 crc kubenswrapper[4667]: I0131 04:24:46.079643 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tg2c4" event={"ID":"56e8c969-d735-4dce-9435-b64960e2805b","Type":"ContainerDied","Data":"ccf83f0881df6e7a13ef5042d305e40a8397af1201167f32a06c6f69ba1f4883"} Jan 31 04:24:47 crc kubenswrapper[4667]: I0131 04:24:47.094900 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tg2c4" event={"ID":"56e8c969-d735-4dce-9435-b64960e2805b","Type":"ContainerStarted","Data":"1d5285c657fe88f5cac0197c55d7f6239a5fb237048443d0aa4c45d889c23528"} Jan 31 04:24:47 crc kubenswrapper[4667]: I0131 04:24:47.099121 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerStarted","Data":"c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f"} Jan 31 04:24:47 crc kubenswrapper[4667]: I0131 04:24:47.125472 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tg2c4" podStartSLOduration=2.536463369 podStartE2EDuration="6.125452155s" podCreationTimestamp="2026-01-31 04:24:41 +0000 UTC" firstStartedPulling="2026-01-31 04:24:42.969791543 +0000 UTC m=+2206.486126842" lastFinishedPulling="2026-01-31 04:24:46.558780319 +0000 UTC m=+2210.075115628" observedRunningTime="2026-01-31 04:24:47.11697959 +0000 UTC m=+2210.633314889" watchObservedRunningTime="2026-01-31 04:24:47.125452155 +0000 UTC m=+2210.641787454" Jan 31 04:24:48 crc kubenswrapper[4667]: I0131 04:24:48.111825 4667 generic.go:334] "Generic (PLEG): container finished" podID="08d5db9a-ffda-439f-a9cf-ff6b06044900" containerID="f833e3e70974c19a840e98c930a7025d0f7349462bcac9b07c5556f334462952" exitCode=0 Jan 31 04:24:48 crc kubenswrapper[4667]: I0131 04:24:48.111996 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mq7tf" event={"ID":"08d5db9a-ffda-439f-a9cf-ff6b06044900","Type":"ContainerDied","Data":"f833e3e70974c19a840e98c930a7025d0f7349462bcac9b07c5556f334462952"} Jan 31 04:24:49 crc kubenswrapper[4667]: I0131 04:24:49.124632 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mq7tf" 
event={"ID":"08d5db9a-ffda-439f-a9cf-ff6b06044900","Type":"ContainerStarted","Data":"75773376e9e27acbd5c483019dc64fb26630e4cdcaa6213d200828666bc4bbd2"} Jan 31 04:24:49 crc kubenswrapper[4667]: I0131 04:24:49.149140 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mq7tf" podStartSLOduration=3.628897748 podStartE2EDuration="6.149116008s" podCreationTimestamp="2026-01-31 04:24:43 +0000 UTC" firstStartedPulling="2026-01-31 04:24:46.070977146 +0000 UTC m=+2209.587312445" lastFinishedPulling="2026-01-31 04:24:48.591195366 +0000 UTC m=+2212.107530705" observedRunningTime="2026-01-31 04:24:49.141672311 +0000 UTC m=+2212.658007620" watchObservedRunningTime="2026-01-31 04:24:49.149116008 +0000 UTC m=+2212.665451307" Jan 31 04:24:51 crc kubenswrapper[4667]: I0131 04:24:51.684131 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tg2c4" Jan 31 04:24:51 crc kubenswrapper[4667]: I0131 04:24:51.685004 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tg2c4" Jan 31 04:24:51 crc kubenswrapper[4667]: I0131 04:24:51.744329 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tg2c4" Jan 31 04:24:52 crc kubenswrapper[4667]: I0131 04:24:52.220699 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tg2c4" Jan 31 04:24:53 crc kubenswrapper[4667]: I0131 04:24:53.340907 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tg2c4"] Jan 31 04:24:54 crc kubenswrapper[4667]: I0131 04:24:54.185534 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tg2c4" podUID="56e8c969-d735-4dce-9435-b64960e2805b" containerName="registry-server" containerID="cri-o://1d5285c657fe88f5cac0197c55d7f6239a5fb237048443d0aa4c45d889c23528" gracePeriod=2 Jan 31 04:24:54 crc kubenswrapper[4667]: I0131 04:24:54.308605 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mq7tf" Jan 31 04:24:54 crc kubenswrapper[4667]: I0131 04:24:54.308691 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mq7tf" Jan 31 04:24:54 crc kubenswrapper[4667]: I0131 04:24:54.415617 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mq7tf" Jan 31 04:24:54 crc kubenswrapper[4667]: I0131 04:24:54.819780 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tg2c4" Jan 31 04:24:54 crc kubenswrapper[4667]: I0131 04:24:54.896746 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e8c969-d735-4dce-9435-b64960e2805b-utilities\") pod \"56e8c969-d735-4dce-9435-b64960e2805b\" (UID: \"56e8c969-d735-4dce-9435-b64960e2805b\") " Jan 31 04:24:54 crc kubenswrapper[4667]: I0131 04:24:54.897264 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nh72\" (UniqueName: \"kubernetes.io/projected/56e8c969-d735-4dce-9435-b64960e2805b-kube-api-access-4nh72\") pod \"56e8c969-d735-4dce-9435-b64960e2805b\" (UID: \"56e8c969-d735-4dce-9435-b64960e2805b\") " Jan 31 04:24:54 crc kubenswrapper[4667]: I0131 04:24:54.897412 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e8c969-d735-4dce-9435-b64960e2805b-catalog-content\") pod \"56e8c969-d735-4dce-9435-b64960e2805b\" (UID: \"56e8c969-d735-4dce-9435-b64960e2805b\") " Jan 31 04:24:54 crc kubenswrapper[4667]: I0131 04:24:54.897662 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56e8c969-d735-4dce-9435-b64960e2805b-utilities" (OuterVolumeSpecName: "utilities") pod "56e8c969-d735-4dce-9435-b64960e2805b" (UID: "56e8c969-d735-4dce-9435-b64960e2805b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:24:54 crc kubenswrapper[4667]: I0131 04:24:54.898028 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e8c969-d735-4dce-9435-b64960e2805b-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:54 crc kubenswrapper[4667]: I0131 04:24:54.921886 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56e8c969-d735-4dce-9435-b64960e2805b-kube-api-access-4nh72" (OuterVolumeSpecName: "kube-api-access-4nh72") pod "56e8c969-d735-4dce-9435-b64960e2805b" (UID: "56e8c969-d735-4dce-9435-b64960e2805b"). InnerVolumeSpecName "kube-api-access-4nh72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:54 crc kubenswrapper[4667]: I0131 04:24:54.968888 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56e8c969-d735-4dce-9435-b64960e2805b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56e8c969-d735-4dce-9435-b64960e2805b" (UID: "56e8c969-d735-4dce-9435-b64960e2805b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:24:55 crc kubenswrapper[4667]: I0131 04:24:55.000241 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nh72\" (UniqueName: \"kubernetes.io/projected/56e8c969-d735-4dce-9435-b64960e2805b-kube-api-access-4nh72\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:55 crc kubenswrapper[4667]: I0131 04:24:55.000282 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e8c969-d735-4dce-9435-b64960e2805b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:55 crc kubenswrapper[4667]: I0131 04:24:55.200326 4667 generic.go:334] "Generic (PLEG): container finished" podID="56e8c969-d735-4dce-9435-b64960e2805b" containerID="1d5285c657fe88f5cac0197c55d7f6239a5fb237048443d0aa4c45d889c23528" exitCode=0 Jan 31 04:24:55 crc kubenswrapper[4667]: I0131 04:24:55.200422 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tg2c4" event={"ID":"56e8c969-d735-4dce-9435-b64960e2805b","Type":"ContainerDied","Data":"1d5285c657fe88f5cac0197c55d7f6239a5fb237048443d0aa4c45d889c23528"} Jan 31 04:24:55 crc kubenswrapper[4667]: I0131 04:24:55.200731 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tg2c4" event={"ID":"56e8c969-d735-4dce-9435-b64960e2805b","Type":"ContainerDied","Data":"4df138dd987c64c939dd29c1ef87be925a86de35b6c40b8ece64c8830bfadf1d"} Jan 31 04:24:55 crc kubenswrapper[4667]: I0131 04:24:55.200509 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tg2c4" Jan 31 04:24:55 crc kubenswrapper[4667]: I0131 04:24:55.200940 4667 scope.go:117] "RemoveContainer" containerID="1d5285c657fe88f5cac0197c55d7f6239a5fb237048443d0aa4c45d889c23528" Jan 31 04:24:55 crc kubenswrapper[4667]: I0131 04:24:55.246465 4667 scope.go:117] "RemoveContainer" containerID="ccf83f0881df6e7a13ef5042d305e40a8397af1201167f32a06c6f69ba1f4883" Jan 31 04:24:55 crc kubenswrapper[4667]: I0131 04:24:55.257004 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tg2c4"] Jan 31 04:24:55 crc kubenswrapper[4667]: I0131 04:24:55.265999 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tg2c4"] Jan 31 04:24:55 crc kubenswrapper[4667]: I0131 04:24:55.272192 4667 scope.go:117] "RemoveContainer" containerID="2cdf41ed2c3eac400821d1776d364111e3682d3b158dc5a54fcac32f660fa715" Jan 31 04:24:55 crc kubenswrapper[4667]: I0131 04:24:55.286091 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mq7tf" Jan 31 04:24:55 crc kubenswrapper[4667]: I0131 04:24:55.307028 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56e8c969-d735-4dce-9435-b64960e2805b" path="/var/lib/kubelet/pods/56e8c969-d735-4dce-9435-b64960e2805b/volumes" Jan 31 04:24:55 crc kubenswrapper[4667]: I0131 04:24:55.334941 4667 scope.go:117] "RemoveContainer" containerID="1d5285c657fe88f5cac0197c55d7f6239a5fb237048443d0aa4c45d889c23528" Jan 31 04:24:55 crc kubenswrapper[4667]: E0131 04:24:55.337275 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d5285c657fe88f5cac0197c55d7f6239a5fb237048443d0aa4c45d889c23528\": container with ID starting with 
1d5285c657fe88f5cac0197c55d7f6239a5fb237048443d0aa4c45d889c23528 not found: ID does not exist" containerID="1d5285c657fe88f5cac0197c55d7f6239a5fb237048443d0aa4c45d889c23528" Jan 31 04:24:55 crc kubenswrapper[4667]: I0131 04:24:55.337339 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d5285c657fe88f5cac0197c55d7f6239a5fb237048443d0aa4c45d889c23528"} err="failed to get container status \"1d5285c657fe88f5cac0197c55d7f6239a5fb237048443d0aa4c45d889c23528\": rpc error: code = NotFound desc = could not find container \"1d5285c657fe88f5cac0197c55d7f6239a5fb237048443d0aa4c45d889c23528\": container with ID starting with 1d5285c657fe88f5cac0197c55d7f6239a5fb237048443d0aa4c45d889c23528 not found: ID does not exist" Jan 31 04:24:55 crc kubenswrapper[4667]: I0131 04:24:55.337405 4667 scope.go:117] "RemoveContainer" containerID="ccf83f0881df6e7a13ef5042d305e40a8397af1201167f32a06c6f69ba1f4883" Jan 31 04:24:55 crc kubenswrapper[4667]: E0131 04:24:55.338252 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccf83f0881df6e7a13ef5042d305e40a8397af1201167f32a06c6f69ba1f4883\": container with ID starting with ccf83f0881df6e7a13ef5042d305e40a8397af1201167f32a06c6f69ba1f4883 not found: ID does not exist" containerID="ccf83f0881df6e7a13ef5042d305e40a8397af1201167f32a06c6f69ba1f4883" Jan 31 04:24:55 crc kubenswrapper[4667]: I0131 04:24:55.338291 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccf83f0881df6e7a13ef5042d305e40a8397af1201167f32a06c6f69ba1f4883"} err="failed to get container status \"ccf83f0881df6e7a13ef5042d305e40a8397af1201167f32a06c6f69ba1f4883\": rpc error: code = NotFound desc = could not find container \"ccf83f0881df6e7a13ef5042d305e40a8397af1201167f32a06c6f69ba1f4883\": container with ID starting with ccf83f0881df6e7a13ef5042d305e40a8397af1201167f32a06c6f69ba1f4883 not found: ID does not exist" Jan 31 04:24:55 crc kubenswrapper[4667]: I0131 04:24:55.338316 4667 scope.go:117] "RemoveContainer" containerID="2cdf41ed2c3eac400821d1776d364111e3682d3b158dc5a54fcac32f660fa715" Jan 31 04:24:55 crc kubenswrapper[4667]: E0131 04:24:55.338921 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cdf41ed2c3eac400821d1776d364111e3682d3b158dc5a54fcac32f660fa715\": container with ID starting with 2cdf41ed2c3eac400821d1776d364111e3682d3b158dc5a54fcac32f660fa715 not found: ID does not exist" containerID="2cdf41ed2c3eac400821d1776d364111e3682d3b158dc5a54fcac32f660fa715" Jan 31 04:24:55 crc kubenswrapper[4667]: I0131 04:24:55.338954 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cdf41ed2c3eac400821d1776d364111e3682d3b158dc5a54fcac32f660fa715"} err="failed to get container status \"2cdf41ed2c3eac400821d1776d364111e3682d3b158dc5a54fcac32f660fa715\": rpc error: code = NotFound desc = could not find container \"2cdf41ed2c3eac400821d1776d364111e3682d3b158dc5a54fcac32f660fa715\": container with ID starting with 2cdf41ed2c3eac400821d1776d364111e3682d3b158dc5a54fcac32f660fa715 not found: ID does not exist" Jan 31 04:24:56 crc kubenswrapper[4667]: I0131 04:24:56.528549 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mq7tf"] Jan 31 04:24:57 crc kubenswrapper[4667]: I0131 04:24:57.227152 4667 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-mq7tf" podUID="08d5db9a-ffda-439f-a9cf-ff6b06044900" containerName="registry-server" containerID="cri-o://75773376e9e27acbd5c483019dc64fb26630e4cdcaa6213d200828666bc4bbd2" gracePeriod=2 Jan 31 04:24:58 crc kubenswrapper[4667]: I0131 04:24:58.244062 4667 generic.go:334] "Generic (PLEG): container finished" podID="08d5db9a-ffda-439f-a9cf-ff6b06044900" containerID="75773376e9e27acbd5c483019dc64fb26630e4cdcaa6213d200828666bc4bbd2" exitCode=0 Jan 31 04:24:58 crc kubenswrapper[4667]: I0131 04:24:58.244121 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mq7tf" event={"ID":"08d5db9a-ffda-439f-a9cf-ff6b06044900","Type":"ContainerDied","Data":"75773376e9e27acbd5c483019dc64fb26630e4cdcaa6213d200828666bc4bbd2"} Jan 31 04:24:58 crc kubenswrapper[4667]: I0131 04:24:58.782793 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mq7tf" Jan 31 04:24:58 crc kubenswrapper[4667]: I0131 04:24:58.797958 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08d5db9a-ffda-439f-a9cf-ff6b06044900-catalog-content\") pod \"08d5db9a-ffda-439f-a9cf-ff6b06044900\" (UID: \"08d5db9a-ffda-439f-a9cf-ff6b06044900\") " Jan 31 04:24:58 crc kubenswrapper[4667]: I0131 04:24:58.807361 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlgnv\" (UniqueName: \"kubernetes.io/projected/08d5db9a-ffda-439f-a9cf-ff6b06044900-kube-api-access-zlgnv\") pod \"08d5db9a-ffda-439f-a9cf-ff6b06044900\" (UID: \"08d5db9a-ffda-439f-a9cf-ff6b06044900\") " Jan 31 04:24:58 crc kubenswrapper[4667]: I0131 04:24:58.807722 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08d5db9a-ffda-439f-a9cf-ff6b06044900-utilities\") pod \"08d5db9a-ffda-439f-a9cf-ff6b06044900\" (UID: \"08d5db9a-ffda-439f-a9cf-ff6b06044900\") " Jan 31 04:24:58 crc kubenswrapper[4667]: I0131 04:24:58.810494 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08d5db9a-ffda-439f-a9cf-ff6b06044900-utilities" (OuterVolumeSpecName: "utilities") pod "08d5db9a-ffda-439f-a9cf-ff6b06044900" (UID: "08d5db9a-ffda-439f-a9cf-ff6b06044900"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:24:58 crc kubenswrapper[4667]: I0131 04:24:58.830087 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08d5db9a-ffda-439f-a9cf-ff6b06044900-kube-api-access-zlgnv" (OuterVolumeSpecName: "kube-api-access-zlgnv") pod "08d5db9a-ffda-439f-a9cf-ff6b06044900" (UID: "08d5db9a-ffda-439f-a9cf-ff6b06044900"). InnerVolumeSpecName "kube-api-access-zlgnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:24:58 crc kubenswrapper[4667]: I0131 04:24:58.856296 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08d5db9a-ffda-439f-a9cf-ff6b06044900-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08d5db9a-ffda-439f-a9cf-ff6b06044900" (UID: "08d5db9a-ffda-439f-a9cf-ff6b06044900"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:24:58 crc kubenswrapper[4667]: I0131 04:24:58.911556 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlgnv\" (UniqueName: \"kubernetes.io/projected/08d5db9a-ffda-439f-a9cf-ff6b06044900-kube-api-access-zlgnv\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:58 crc kubenswrapper[4667]: I0131 04:24:58.911615 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08d5db9a-ffda-439f-a9cf-ff6b06044900-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:58 crc kubenswrapper[4667]: I0131 04:24:58.911633 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08d5db9a-ffda-439f-a9cf-ff6b06044900-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:24:59 crc kubenswrapper[4667]: I0131 04:24:59.260565 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mq7tf" event={"ID":"08d5db9a-ffda-439f-a9cf-ff6b06044900","Type":"ContainerDied","Data":"16fbb44dbad1f18620617e4d4576563f1a9bf4a1ea10a8dbf585c16dffe6e791"} Jan 31 04:24:59 crc kubenswrapper[4667]: I0131 04:24:59.261168 4667 scope.go:117] "RemoveContainer" containerID="75773376e9e27acbd5c483019dc64fb26630e4cdcaa6213d200828666bc4bbd2" Jan 31 04:24:59 crc kubenswrapper[4667]: I0131 04:24:59.260771 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mq7tf" Jan 31 04:24:59 crc kubenswrapper[4667]: I0131 04:24:59.263783 4667 generic.go:334] "Generic (PLEG): container finished" podID="92bb44a8-6936-4c3f-96f6-b9572d90574d" containerID="77f59a11d135a2f4bd8b1700a213c915460f5addce7a87c4ca389d8d681213a3" exitCode=0 Jan 31 04:24:59 crc kubenswrapper[4667]: I0131 04:24:59.263826 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" event={"ID":"92bb44a8-6936-4c3f-96f6-b9572d90574d","Type":"ContainerDied","Data":"77f59a11d135a2f4bd8b1700a213c915460f5addce7a87c4ca389d8d681213a3"} Jan 31 04:24:59 crc kubenswrapper[4667]: I0131 04:24:59.307549 4667 scope.go:117] "RemoveContainer" containerID="f833e3e70974c19a840e98c930a7025d0f7349462bcac9b07c5556f334462952" Jan 31 04:24:59 crc kubenswrapper[4667]: I0131 04:24:59.345676 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mq7tf"] Jan 31 04:24:59 crc kubenswrapper[4667]: I0131 04:24:59.355788 4667 scope.go:117] "RemoveContainer" containerID="d15ee424e0ed04c9e99adc36335e24e4fb1344db8ec502eb3be208dbf673a7c4" Jan 31 04:24:59 crc kubenswrapper[4667]: I0131 04:24:59.368148 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mq7tf"] Jan 31 04:25:00 crc kubenswrapper[4667]: I0131 04:25:00.833431 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:25:00 crc kubenswrapper[4667]: I0131 04:25:00.865729 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-ssh-key-openstack-edpm-ipam\") pod \"92bb44a8-6936-4c3f-96f6-b9572d90574d\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " Jan 31 04:25:00 crc kubenswrapper[4667]: I0131 04:25:00.865820 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjfbf\" (UniqueName: \"kubernetes.io/projected/92bb44a8-6936-4c3f-96f6-b9572d90574d-kube-api-access-tjfbf\") pod \"92bb44a8-6936-4c3f-96f6-b9572d90574d\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " Jan 31 04:25:00 crc kubenswrapper[4667]: I0131 04:25:00.865885 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-inventory\") pod \"92bb44a8-6936-4c3f-96f6-b9572d90574d\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " Jan 31 04:25:00 crc kubenswrapper[4667]: I0131 04:25:00.865942 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-nova-metadata-neutron-config-0\") pod \"92bb44a8-6936-4c3f-96f6-b9572d90574d\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " Jan 31 04:25:00 crc kubenswrapper[4667]: I0131 04:25:00.866727 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-neutron-metadata-combined-ca-bundle\") pod \"92bb44a8-6936-4c3f-96f6-b9572d90574d\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " Jan 31 04:25:00 crc kubenswrapper[4667]: I0131 04:25:00.866806 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"92bb44a8-6936-4c3f-96f6-b9572d90574d\" (UID: \"92bb44a8-6936-4c3f-96f6-b9572d90574d\") " Jan 31 04:25:00 crc kubenswrapper[4667]: I0131 04:25:00.887604 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92bb44a8-6936-4c3f-96f6-b9572d90574d-kube-api-access-tjfbf" (OuterVolumeSpecName: "kube-api-access-tjfbf") pod "92bb44a8-6936-4c3f-96f6-b9572d90574d" (UID: "92bb44a8-6936-4c3f-96f6-b9572d90574d"). InnerVolumeSpecName "kube-api-access-tjfbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:25:00 crc kubenswrapper[4667]: I0131 04:25:00.892379 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "92bb44a8-6936-4c3f-96f6-b9572d90574d" (UID: "92bb44a8-6936-4c3f-96f6-b9572d90574d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:25:00 crc kubenswrapper[4667]: I0131 04:25:00.902232 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "92bb44a8-6936-4c3f-96f6-b9572d90574d" (UID: "92bb44a8-6936-4c3f-96f6-b9572d90574d"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:25:00 crc kubenswrapper[4667]: I0131 04:25:00.906187 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-inventory" (OuterVolumeSpecName: "inventory") pod "92bb44a8-6936-4c3f-96f6-b9572d90574d" (UID: "92bb44a8-6936-4c3f-96f6-b9572d90574d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:25:00 crc kubenswrapper[4667]: I0131 04:25:00.913441 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "92bb44a8-6936-4c3f-96f6-b9572d90574d" (UID: "92bb44a8-6936-4c3f-96f6-b9572d90574d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:25:00 crc kubenswrapper[4667]: I0131 04:25:00.933624 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "92bb44a8-6936-4c3f-96f6-b9572d90574d" (UID: "92bb44a8-6936-4c3f-96f6-b9572d90574d"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:25:00 crc kubenswrapper[4667]: I0131 04:25:00.972212 4667 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:25:00 crc kubenswrapper[4667]: I0131 04:25:00.972244 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjfbf\" (UniqueName: \"kubernetes.io/projected/92bb44a8-6936-4c3f-96f6-b9572d90574d-kube-api-access-tjfbf\") on node \"crc\" DevicePath \"\"" Jan 31 04:25:00 crc kubenswrapper[4667]: I0131 04:25:00.972257 4667 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:25:00 crc kubenswrapper[4667]: I0131 04:25:00.972267 4667 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:25:00 crc kubenswrapper[4667]: I0131 04:25:00.972279 4667 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:25:00 crc kubenswrapper[4667]: I0131 04:25:00.972291 4667 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/92bb44a8-6936-4c3f-96f6-b9572d90574d-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.293100 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.296814 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08d5db9a-ffda-439f-a9cf-ff6b06044900" path="/var/lib/kubelet/pods/08d5db9a-ffda-439f-a9cf-ff6b06044900/volumes" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.299085 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb" event={"ID":"92bb44a8-6936-4c3f-96f6-b9572d90574d","Type":"ContainerDied","Data":"09b6b6be778997b1146b4299c6ece8aa19f92e75b3729062d448e16497599305"} Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.299306 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09b6b6be778997b1146b4299c6ece8aa19f92e75b3729062d448e16497599305" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.440114 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k"] Jan 31 04:25:01 crc kubenswrapper[4667]: E0131 04:25:01.440668 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d5db9a-ffda-439f-a9cf-ff6b06044900" containerName="extract-content" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.440695 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d5db9a-ffda-439f-a9cf-ff6b06044900" containerName="extract-content" Jan 31 04:25:01 crc kubenswrapper[4667]: E0131 04:25:01.440721 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d5db9a-ffda-439f-a9cf-ff6b06044900" containerName="extract-utilities" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.440733 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d5db9a-ffda-439f-a9cf-ff6b06044900" containerName="extract-utilities" Jan 31 04:25:01 crc kubenswrapper[4667]: E0131 04:25:01.440761 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92bb44a8-6936-4c3f-96f6-b9572d90574d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.440773 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="92bb44a8-6936-4c3f-96f6-b9572d90574d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 04:25:01 crc kubenswrapper[4667]: E0131 04:25:01.440799 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e8c969-d735-4dce-9435-b64960e2805b" containerName="extract-content" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.440810 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e8c969-d735-4dce-9435-b64960e2805b" containerName="extract-content" Jan 31 04:25:01 crc kubenswrapper[4667]: E0131 04:25:01.440836 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d5db9a-ffda-439f-a9cf-ff6b06044900" containerName="registry-server" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.440870 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d5db9a-ffda-439f-a9cf-ff6b06044900" containerName="registry-server" Jan 31 04:25:01 crc kubenswrapper[4667]: E0131 04:25:01.440884 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e8c969-d735-4dce-9435-b64960e2805b" containerName="registry-server" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.440894 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e8c969-d735-4dce-9435-b64960e2805b" containerName="registry-server" 
Jan 31 04:25:01 crc kubenswrapper[4667]: E0131 04:25:01.440917 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e8c969-d735-4dce-9435-b64960e2805b" containerName="extract-utilities" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.440931 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e8c969-d735-4dce-9435-b64960e2805b" containerName="extract-utilities" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.441226 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="08d5db9a-ffda-439f-a9cf-ff6b06044900" containerName="registry-server" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.441278 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="92bb44a8-6936-4c3f-96f6-b9572d90574d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.441303 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="56e8c969-d735-4dce-9435-b64960e2805b" containerName="registry-server" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.444522 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.447696 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.447873 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7p2q" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.448212 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.450773 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.451106 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.477252 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k"] Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.484793 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxbcv\" (UniqueName: \"kubernetes.io/projected/a8376acd-0ea2-4ac1-a843-59932a976b4e-kube-api-access-vxbcv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k\" (UID: \"a8376acd-0ea2-4ac1-a843-59932a976b4e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.484941 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k\" (UID: \"a8376acd-0ea2-4ac1-a843-59932a976b4e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.485014 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k\" 
(UID: \"a8376acd-0ea2-4ac1-a843-59932a976b4e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.485059 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k\" (UID: \"a8376acd-0ea2-4ac1-a843-59932a976b4e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.485157 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k\" (UID: \"a8376acd-0ea2-4ac1-a843-59932a976b4e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.587280 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k\" (UID: \"a8376acd-0ea2-4ac1-a843-59932a976b4e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.587714 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k\" (UID: \"a8376acd-0ea2-4ac1-a843-59932a976b4e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.587856 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxbcv\" (UniqueName: \"kubernetes.io/projected/a8376acd-0ea2-4ac1-a843-59932a976b4e-kube-api-access-vxbcv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k\" (UID: \"a8376acd-0ea2-4ac1-a843-59932a976b4e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.587987 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k\" (UID: \"a8376acd-0ea2-4ac1-a843-59932a976b4e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.588107 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k\" (UID: \"a8376acd-0ea2-4ac1-a843-59932a976b4e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.594203 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k\" 
(UID: \"a8376acd-0ea2-4ac1-a843-59932a976b4e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.594521 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k\" (UID: \"a8376acd-0ea2-4ac1-a843-59932a976b4e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.594613 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k\" (UID: \"a8376acd-0ea2-4ac1-a843-59932a976b4e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.595061 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k\" (UID: \"a8376acd-0ea2-4ac1-a843-59932a976b4e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.611249 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxbcv\" (UniqueName: \"kubernetes.io/projected/a8376acd-0ea2-4ac1-a843-59932a976b4e-kube-api-access-vxbcv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k\" (UID: \"a8376acd-0ea2-4ac1-a843-59932a976b4e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" Jan 31 04:25:01 crc kubenswrapper[4667]: I0131 04:25:01.774178 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" Jan 31 04:25:02 crc kubenswrapper[4667]: W0131 04:25:02.433257 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8376acd_0ea2_4ac1_a843_59932a976b4e.slice/crio-daf5dce6619f7bede670a22b23c98fbe17e4450a5da5189063499689e7c2c0e0 WatchSource:0}: Error finding container daf5dce6619f7bede670a22b23c98fbe17e4450a5da5189063499689e7c2c0e0: Status 404 returned error can't find the container with id daf5dce6619f7bede670a22b23c98fbe17e4450a5da5189063499689e7c2c0e0 Jan 31 04:25:02 crc kubenswrapper[4667]: I0131 04:25:02.439684 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k"] Jan 31 04:25:02 crc kubenswrapper[4667]: E0131 04:25:02.857069 4667 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d5db9a_ffda_439f_a9cf_ff6b06044900.slice/crio-75773376e9e27acbd5c483019dc64fb26630e4cdcaa6213d200828666bc4bbd2.scope\": RecentStats: unable to find data in memory cache]" Jan 31 04:25:03 crc kubenswrapper[4667]: I0131 04:25:03.315374 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" event={"ID":"a8376acd-0ea2-4ac1-a843-59932a976b4e","Type":"ContainerStarted","Data":"981456e21ffa8eae3ebff5cd23aeee25f439c342e67444405a5cf5e2ab8c5824"} Jan 31 04:25:03 crc kubenswrapper[4667]: I0131 04:25:03.316155 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" event={"ID":"a8376acd-0ea2-4ac1-a843-59932a976b4e","Type":"ContainerStarted","Data":"daf5dce6619f7bede670a22b23c98fbe17e4450a5da5189063499689e7c2c0e0"} Jan 31 04:25:03 crc kubenswrapper[4667]: I0131 04:25:03.340916 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" podStartSLOduration=1.898580403 podStartE2EDuration="2.340897209s" podCreationTimestamp="2026-01-31 04:25:01 +0000 UTC" firstStartedPulling="2026-01-31 04:25:02.436974246 +0000 UTC m=+2225.953309545" lastFinishedPulling="2026-01-31 04:25:02.879291052 +0000 UTC m=+2226.395626351" observedRunningTime="2026-01-31 04:25:03.337305344 +0000 UTC m=+2226.853640643" watchObservedRunningTime="2026-01-31 04:25:03.340897209 +0000 UTC m=+2226.857232518" Jan 31 04:25:13 crc kubenswrapper[4667]: E0131 04:25:13.119303 4667 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d5db9a_ffda_439f_a9cf_ff6b06044900.slice/crio-75773376e9e27acbd5c483019dc64fb26630e4cdcaa6213d200828666bc4bbd2.scope\": RecentStats: unable to find data in memory cache]" Jan 31 04:25:23 crc kubenswrapper[4667]: E0131 04:25:23.398590 4667 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d5db9a_ffda_439f_a9cf_ff6b06044900.slice/crio-75773376e9e27acbd5c483019dc64fb26630e4cdcaa6213d200828666bc4bbd2.scope\": RecentStats: unable to find data in memory cache]" Jan 31 04:25:33 crc kubenswrapper[4667]: E0131 04:25:33.635180 4667 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d5db9a_ffda_439f_a9cf_ff6b06044900.slice/crio-75773376e9e27acbd5c483019dc64fb26630e4cdcaa6213d200828666bc4bbd2.scope\": RecentStats: unable to find data in memory cache]" Jan 31 04:25:43 crc kubenswrapper[4667]: E0131 04:25:43.880743 4667 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d5db9a_ffda_439f_a9cf_ff6b06044900.slice/crio-75773376e9e27acbd5c483019dc64fb26630e4cdcaa6213d200828666bc4bbd2.scope\": RecentStats: unable to find data in memory cache]" Jan 31 04:25:54 crc kubenswrapper[4667]: E0131 04:25:54.179050 4667 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d5db9a_ffda_439f_a9cf_ff6b06044900.slice/crio-75773376e9e27acbd5c483019dc64fb26630e4cdcaa6213d200828666bc4bbd2.scope\": RecentStats: unable to find data in memory cache]" Jan 31 04:27:15 crc kubenswrapper[4667]: I0131 04:27:15.705257 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:27:15 crc kubenswrapper[4667]: I0131 04:27:15.706124 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:27:45 crc kubenswrapper[4667]: I0131 04:27:45.704695 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:27:45 crc kubenswrapper[4667]: I0131 04:27:45.705486 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:28:15 crc kubenswrapper[4667]: I0131 04:28:15.704640 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:28:15 crc kubenswrapper[4667]: I0131 04:28:15.705555 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:28:15 crc kubenswrapper[4667]: I0131 04:28:15.705624 4667 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 04:28:15 crc kubenswrapper[4667]: 
I0131 04:28:15.706711 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f"} pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 04:28:15 crc kubenswrapper[4667]: I0131 04:28:15.706775 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" containerID="cri-o://c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f" gracePeriod=600
Jan 31 04:28:15 crc kubenswrapper[4667]: E0131 04:28:15.844457 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:28:16 crc kubenswrapper[4667]: I0131 04:28:16.509345 4667 generic.go:334] "Generic (PLEG): container finished" podID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f" exitCode=0
Jan 31 04:28:16 crc kubenswrapper[4667]: I0131 04:28:16.509446 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerDied","Data":"c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f"}
Jan 31 04:28:16 crc kubenswrapper[4667]: I0131 04:28:16.509563 4667 scope.go:117] "RemoveContainer" containerID="ef1eb767bd280ddd0c820466d361f32d61a4b5810e115e81b09dba157f785aad"
Jan 31 04:28:16 crc kubenswrapper[4667]: I0131 04:28:16.510306 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f"
Jan 31 04:28:16 crc kubenswrapper[4667]: E0131 04:28:16.510645 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:28:28 crc kubenswrapper[4667]: I0131 04:28:28.284433 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f"
Jan 31 04:28:28 crc kubenswrapper[4667]: E0131 04:28:28.285579 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:28:39 crc kubenswrapper[4667]: I0131 04:28:39.282198 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f"
Jan 31 04:28:39 crc kubenswrapper[4667]: E0131 04:28:39.283719 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:28:50 crc kubenswrapper[4667]: I0131 04:28:50.281986 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f"
Jan 31 04:28:50 crc kubenswrapper[4667]: E0131 04:28:50.285168 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:29:04 crc kubenswrapper[4667]: I0131 04:29:04.058423 4667 generic.go:334] "Generic (PLEG): container finished" podID="a8376acd-0ea2-4ac1-a843-59932a976b4e" containerID="981456e21ffa8eae3ebff5cd23aeee25f439c342e67444405a5cf5e2ab8c5824" exitCode=0
Jan 31 04:29:04 crc kubenswrapper[4667]: I0131 04:29:04.058603 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" event={"ID":"a8376acd-0ea2-4ac1-a843-59932a976b4e","Type":"ContainerDied","Data":"981456e21ffa8eae3ebff5cd23aeee25f439c342e67444405a5cf5e2ab8c5824"}
Jan 31 04:29:05 crc kubenswrapper[4667]: I0131 04:29:05.285229 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f"
Jan 31 04:29:05 crc kubenswrapper[4667]: E0131 04:29:05.285936 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:29:05 crc kubenswrapper[4667]: I0131 04:29:05.522187 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k"
Jan 31 04:29:05 crc kubenswrapper[4667]: I0131 04:29:05.634442 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-libvirt-secret-0\") pod \"a8376acd-0ea2-4ac1-a843-59932a976b4e\" (UID: \"a8376acd-0ea2-4ac1-a843-59932a976b4e\") "
Jan 31 04:29:05 crc kubenswrapper[4667]: I0131 04:29:05.634789 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxbcv\" (UniqueName: \"kubernetes.io/projected/a8376acd-0ea2-4ac1-a843-59932a976b4e-kube-api-access-vxbcv\") pod \"a8376acd-0ea2-4ac1-a843-59932a976b4e\" (UID: \"a8376acd-0ea2-4ac1-a843-59932a976b4e\") "
Jan 31 04:29:05 crc kubenswrapper[4667]: I0131 04:29:05.634857 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-ssh-key-openstack-edpm-ipam\") pod \"a8376acd-0ea2-4ac1-a843-59932a976b4e\" (UID: \"a8376acd-0ea2-4ac1-a843-59932a976b4e\") "
Jan 31 04:29:05 crc kubenswrapper[4667]: I0131 04:29:05.634916 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-libvirt-combined-ca-bundle\") pod \"a8376acd-0ea2-4ac1-a843-59932a976b4e\" (UID: \"a8376acd-0ea2-4ac1-a843-59932a976b4e\") "
Jan 31 04:29:05 crc kubenswrapper[4667]: I0131 04:29:05.634970 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-inventory\") pod \"a8376acd-0ea2-4ac1-a843-59932a976b4e\" (UID: \"a8376acd-0ea2-4ac1-a843-59932a976b4e\") "
Jan 31 04:29:05 crc kubenswrapper[4667]: I0131 04:29:05.643369 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a8376acd-0ea2-4ac1-a843-59932a976b4e" (UID: "a8376acd-0ea2-4ac1-a843-59932a976b4e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:29:05 crc kubenswrapper[4667]: I0131 04:29:05.650050 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8376acd-0ea2-4ac1-a843-59932a976b4e-kube-api-access-vxbcv" (OuterVolumeSpecName: "kube-api-access-vxbcv") pod "a8376acd-0ea2-4ac1-a843-59932a976b4e" (UID: "a8376acd-0ea2-4ac1-a843-59932a976b4e"). InnerVolumeSpecName "kube-api-access-vxbcv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:29:05 crc kubenswrapper[4667]: I0131 04:29:05.668157 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "a8376acd-0ea2-4ac1-a843-59932a976b4e" (UID: "a8376acd-0ea2-4ac1-a843-59932a976b4e"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:29:05 crc kubenswrapper[4667]: I0131 04:29:05.678502 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a8376acd-0ea2-4ac1-a843-59932a976b4e" (UID: "a8376acd-0ea2-4ac1-a843-59932a976b4e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:29:05 crc kubenswrapper[4667]: I0131 04:29:05.690017 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-inventory" (OuterVolumeSpecName: "inventory") pod "a8376acd-0ea2-4ac1-a843-59932a976b4e" (UID: "a8376acd-0ea2-4ac1-a843-59932a976b4e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:29:05 crc kubenswrapper[4667]: I0131 04:29:05.739128 4667 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Jan 31 04:29:05 crc kubenswrapper[4667]: I0131 04:29:05.739169 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxbcv\" (UniqueName: \"kubernetes.io/projected/a8376acd-0ea2-4ac1-a843-59932a976b4e-kube-api-access-vxbcv\") on node \"crc\" DevicePath \"\""
Jan 31 04:29:05 crc kubenswrapper[4667]: I0131 04:29:05.739180 4667 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 31 04:29:05 crc kubenswrapper[4667]: I0131 04:29:05.739192 4667 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 04:29:05 crc kubenswrapper[4667]: I0131 04:29:05.739203 4667 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8376acd-0ea2-4ac1-a843-59932a976b4e-inventory\") on node \"crc\" DevicePath \"\""
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.154420 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k" event={"ID":"a8376acd-0ea2-4ac1-a843-59932a976b4e","Type":"ContainerDied","Data":"daf5dce6619f7bede670a22b23c98fbe17e4450a5da5189063499689e7c2c0e0"}
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.154470 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daf5dce6619f7bede670a22b23c98fbe17e4450a5da5189063499689e7c2c0e0"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.154611 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.277199 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"]
Jan 31 04:29:06 crc kubenswrapper[4667]: E0131 04:29:06.277607 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8376acd-0ea2-4ac1-a843-59932a976b4e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.277628 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8376acd-0ea2-4ac1-a843-59932a976b4e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.277900 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8376acd-0ea2-4ac1-a843-59932a976b4e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.278578 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.282610 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.283109 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.283300 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.285085 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7p2q"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.285277 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.286914 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.294385 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.298464 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"]
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.359028 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.359098 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.359171 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.359194 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.359221 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.359239 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7fcd\" (UniqueName: \"kubernetes.io/projected/8d2d3410-e5e4-4607-ab3c-74199d66293d-kube-api-access-g7fcd\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.359271 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.359299 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.359323 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.460751 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.460811 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.460860 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.460880 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7fcd\" (UniqueName: \"kubernetes.io/projected/8d2d3410-e5e4-4607-ab3c-74199d66293d-kube-api-access-g7fcd\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.460915 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.460947 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.460980 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.461025 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.461057 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.462827 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.467826 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.471543 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.472528 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.473390 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.474357 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.479455 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.483719 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.488431 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7fcd\" (UniqueName: \"kubernetes.io/projected/8d2d3410-e5e4-4607-ab3c-74199d66293d-kube-api-access-g7fcd\") pod \"nova-edpm-deployment-openstack-edpm-ipam-7swhs\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:06 crc kubenswrapper[4667]: I0131 04:29:06.595898 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:29:07 crc kubenswrapper[4667]: I0131 04:29:07.276447 4667 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 31 04:29:07 crc kubenswrapper[4667]: I0131 04:29:07.302102 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"]
Jan 31 04:29:08 crc kubenswrapper[4667]: I0131 04:29:08.177029 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs" event={"ID":"8d2d3410-e5e4-4607-ab3c-74199d66293d","Type":"ContainerStarted","Data":"6694105a5596cb5206ee1f9d6a848ab3acfa4e6eff1258bbe55eeb0293990cfa"}
Jan 31 04:29:08 crc kubenswrapper[4667]: I0131 04:29:08.177941 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs" event={"ID":"8d2d3410-e5e4-4607-ab3c-74199d66293d","Type":"ContainerStarted","Data":"60a4dcb5cc69dc87befe2d7a6cde8e312398267fca3ecf1326464660da7eafea"}
Jan 31 04:29:08 crc kubenswrapper[4667]: I0131 04:29:08.206372 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs" podStartSLOduration=1.75755596 podStartE2EDuration="2.206348546s" podCreationTimestamp="2026-01-31 04:29:06 +0000 UTC" firstStartedPulling="2026-01-31 04:29:07.276150405 +0000 UTC m=+2470.792485724" lastFinishedPulling="2026-01-31 04:29:07.724943001 +0000 UTC m=+2471.241278310" observedRunningTime="2026-01-31 04:29:08.194635558 +0000 UTC m=+2471.710970857" watchObservedRunningTime="2026-01-31 04:29:08.206348546 +0000 UTC m=+2471.722683855"
Jan 31 04:29:16 crc kubenswrapper[4667]: I0131 04:29:16.282233 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f"
Jan 31 04:29:16 crc kubenswrapper[4667]: E0131 04:29:16.283551 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:29:25 crc kubenswrapper[4667]: I0131 04:29:25.724924 4667 scope.go:117] "RemoveContainer" containerID="ac0233e49177bd835669f1c724c02192762f22b81440dced8f895365a43528b5"
Jan 31 04:29:25 crc kubenswrapper[4667]: I0131 04:29:25.759743 4667 scope.go:117] "RemoveContainer" containerID="83b6c019c23f8169416fba886494a66286e5ab62197a9c0c9a9ddbf6f16414df"
Jan 31 04:29:25 crc kubenswrapper[4667]: I0131 04:29:25.823452 4667 scope.go:117] "RemoveContainer" containerID="9e646935a02c0343613ea9367e675f4e001259de60e340721dab504d3c4e51d0"
Jan 31 04:29:31 crc kubenswrapper[4667]: I0131 04:29:31.284929 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f"
Jan 31 04:29:31 crc kubenswrapper[4667]: E0131 04:29:31.286048 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:29:44 crc kubenswrapper[4667]: I0131 04:29:44.281828 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f"
Jan 31 04:29:44 crc kubenswrapper[4667]: E0131 04:29:44.283094 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:29:58 crc kubenswrapper[4667]: I0131 04:29:58.282313 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f"
Jan 31 04:29:58 crc kubenswrapper[4667]: E0131 04:29:58.283293 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:30:00 crc kubenswrapper[4667]: I0131 04:30:00.153154 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497230-c4tz7"]
Jan 31 04:30:00 crc kubenswrapper[4667]: I0131 04:30:00.156706 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-c4tz7"
Jan 31 04:30:00 crc kubenswrapper[4667]: I0131 04:30:00.160504 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 31 04:30:00 crc kubenswrapper[4667]: I0131 04:30:00.166908 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497230-c4tz7"]
Jan 31 04:30:00 crc kubenswrapper[4667]: I0131 04:30:00.167138 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 31 04:30:00 crc kubenswrapper[4667]: I0131 04:30:00.247957 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdp78\" (UniqueName: \"kubernetes.io/projected/bb801819-1be3-484c-a16e-d06fbd5f6c22-kube-api-access-qdp78\") pod \"collect-profiles-29497230-c4tz7\" (UID: \"bb801819-1be3-484c-a16e-d06fbd5f6c22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-c4tz7"
Jan 31 04:30:00 crc kubenswrapper[4667]: I0131 04:30:00.248094 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb801819-1be3-484c-a16e-d06fbd5f6c22-config-volume\") pod \"collect-profiles-29497230-c4tz7\" (UID: \"bb801819-1be3-484c-a16e-d06fbd5f6c22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-c4tz7"
Jan 31 04:30:00 crc kubenswrapper[4667]: I0131 04:30:00.248193 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bb801819-1be3-484c-a16e-d06fbd5f6c22-secret-volume\") pod \"collect-profiles-29497230-c4tz7\" (UID: \"bb801819-1be3-484c-a16e-d06fbd5f6c22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-c4tz7"
Jan 31 04:30:00 crc kubenswrapper[4667]: I0131 04:30:00.351581 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb801819-1be3-484c-a16e-d06fbd5f6c22-config-volume\") pod \"collect-profiles-29497230-c4tz7\" (UID: \"bb801819-1be3-484c-a16e-d06fbd5f6c22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-c4tz7"
Jan 31 04:30:00 crc kubenswrapper[4667]: I0131 04:30:00.351858 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bb801819-1be3-484c-a16e-d06fbd5f6c22-secret-volume\") pod \"collect-profiles-29497230-c4tz7\" (UID: \"bb801819-1be3-484c-a16e-d06fbd5f6c22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-c4tz7"
Jan 31 04:30:00 crc kubenswrapper[4667]: I0131 04:30:00.352072 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdp78\" (UniqueName: \"kubernetes.io/projected/bb801819-1be3-484c-a16e-d06fbd5f6c22-kube-api-access-qdp78\") pod \"collect-profiles-29497230-c4tz7\" (UID: \"bb801819-1be3-484c-a16e-d06fbd5f6c22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-c4tz7"
Jan 31 04:30:00 crc kubenswrapper[4667]: I0131 04:30:00.354980 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb801819-1be3-484c-a16e-d06fbd5f6c22-config-volume\") pod \"collect-profiles-29497230-c4tz7\" (UID: \"bb801819-1be3-484c-a16e-d06fbd5f6c22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-c4tz7"
Jan 31 04:30:00 crc kubenswrapper[4667]: I0131 04:30:00.366201 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bb801819-1be3-484c-a16e-d06fbd5f6c22-secret-volume\") pod \"collect-profiles-29497230-c4tz7\" (UID: \"bb801819-1be3-484c-a16e-d06fbd5f6c22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-c4tz7"
Jan 31 04:30:00 crc kubenswrapper[4667]: I0131 04:30:00.379762 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdp78\" (UniqueName: \"kubernetes.io/projected/bb801819-1be3-484c-a16e-d06fbd5f6c22-kube-api-access-qdp78\") pod \"collect-profiles-29497230-c4tz7\" (UID: \"bb801819-1be3-484c-a16e-d06fbd5f6c22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-c4tz7"
Jan 31 04:30:00 crc kubenswrapper[4667]: I0131 04:30:00.478430 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-c4tz7"
Jan 31 04:30:00 crc kubenswrapper[4667]: I0131 04:30:00.994240 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497230-c4tz7"]
Jan 31 04:30:01 crc kubenswrapper[4667]: I0131 04:30:01.794546 4667 generic.go:334] "Generic (PLEG): container finished" podID="bb801819-1be3-484c-a16e-d06fbd5f6c22" containerID="8d03b1c048de32e51c76877e3a6e33288709f151067239608584b666c7c0786b" exitCode=0
Jan 31 04:30:01 crc kubenswrapper[4667]: I0131 04:30:01.795130 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-c4tz7" event={"ID":"bb801819-1be3-484c-a16e-d06fbd5f6c22","Type":"ContainerDied","Data":"8d03b1c048de32e51c76877e3a6e33288709f151067239608584b666c7c0786b"}
Jan 31 04:30:01 crc kubenswrapper[4667]: I0131 04:30:01.796067 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-c4tz7" event={"ID":"bb801819-1be3-484c-a16e-d06fbd5f6c22","Type":"ContainerStarted","Data":"6bcb7f28a839422ac325a1aebe2583b3b24b2318d9aca5c153115c3014696cf8"}
Jan 31 04:30:03 crc kubenswrapper[4667]: I0131 04:30:03.177983 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-c4tz7"
Jan 31 04:30:03 crc kubenswrapper[4667]: I0131 04:30:03.316660 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bb801819-1be3-484c-a16e-d06fbd5f6c22-secret-volume\") pod \"bb801819-1be3-484c-a16e-d06fbd5f6c22\" (UID: \"bb801819-1be3-484c-a16e-d06fbd5f6c22\") "
Jan 31 04:30:03 crc kubenswrapper[4667]: I0131 04:30:03.316810 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdp78\" (UniqueName: \"kubernetes.io/projected/bb801819-1be3-484c-a16e-d06fbd5f6c22-kube-api-access-qdp78\") pod \"bb801819-1be3-484c-a16e-d06fbd5f6c22\" (UID: \"bb801819-1be3-484c-a16e-d06fbd5f6c22\") "
Jan 31 04:30:03 crc kubenswrapper[4667]: I0131 04:30:03.316919 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb801819-1be3-484c-a16e-d06fbd5f6c22-config-volume\") pod \"bb801819-1be3-484c-a16e-d06fbd5f6c22\" (UID: \"bb801819-1be3-484c-a16e-d06fbd5f6c22\") "
Jan 31 04:30:03 crc kubenswrapper[4667]: I0131 04:30:03.317886 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb801819-1be3-484c-a16e-d06fbd5f6c22-config-volume" (OuterVolumeSpecName: "config-volume") pod "bb801819-1be3-484c-a16e-d06fbd5f6c22" (UID: "bb801819-1be3-484c-a16e-d06fbd5f6c22"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 04:30:03 crc kubenswrapper[4667]: I0131 04:30:03.322755 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb801819-1be3-484c-a16e-d06fbd5f6c22-kube-api-access-qdp78" (OuterVolumeSpecName: "kube-api-access-qdp78") pod "bb801819-1be3-484c-a16e-d06fbd5f6c22" (UID: "bb801819-1be3-484c-a16e-d06fbd5f6c22"). InnerVolumeSpecName "kube-api-access-qdp78". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:30:03 crc kubenswrapper[4667]: I0131 04:30:03.337684 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb801819-1be3-484c-a16e-d06fbd5f6c22-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bb801819-1be3-484c-a16e-d06fbd5f6c22" (UID: "bb801819-1be3-484c-a16e-d06fbd5f6c22"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:30:03 crc kubenswrapper[4667]: I0131 04:30:03.418769 4667 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bb801819-1be3-484c-a16e-d06fbd5f6c22-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 31 04:30:03 crc kubenswrapper[4667]: I0131 04:30:03.418820 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdp78\" (UniqueName: \"kubernetes.io/projected/bb801819-1be3-484c-a16e-d06fbd5f6c22-kube-api-access-qdp78\") on node \"crc\" DevicePath \"\""
Jan 31 04:30:03 crc kubenswrapper[4667]: I0131 04:30:03.418832 4667 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb801819-1be3-484c-a16e-d06fbd5f6c22-config-volume\") on node \"crc\" DevicePath \"\""
Jan 31 04:30:03 crc kubenswrapper[4667]: I0131 04:30:03.820504 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-c4tz7" event={"ID":"bb801819-1be3-484c-a16e-d06fbd5f6c22","Type":"ContainerDied","Data":"6bcb7f28a839422ac325a1aebe2583b3b24b2318d9aca5c153115c3014696cf8"}
Jan 31 04:30:03 crc kubenswrapper[4667]: I0131 04:30:03.820588 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bcb7f28a839422ac325a1aebe2583b3b24b2318d9aca5c153115c3014696cf8"
Jan 31 04:30:03 crc kubenswrapper[4667]: I0131 04:30:03.821187 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497230-c4tz7"
Jan 31 04:30:04 crc kubenswrapper[4667]: I0131 04:30:04.278528 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp"]
Jan 31 04:30:04 crc kubenswrapper[4667]: I0131 04:30:04.286619 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497185-9b5gp"]
Jan 31 04:30:05 crc kubenswrapper[4667]: I0131 04:30:05.300364 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="145e5e24-2f94-48b2-be05-b08dbbb09312" path="/var/lib/kubelet/pods/145e5e24-2f94-48b2-be05-b08dbbb09312/volumes"
Jan 31 04:30:13 crc kubenswrapper[4667]: I0131 04:30:13.282959 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f"
Jan 31 04:30:13 crc kubenswrapper[4667]: E0131 04:30:13.283728 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:30:25 crc kubenswrapper[4667]: I0131 04:30:25.910470 4667 scope.go:117] "RemoveContainer" containerID="bd4582241bfb08235ad49f9224238c8abad1554ad5555edf41cc3df1d03882a8"
Jan 31 04:30:26 crc kubenswrapper[4667]: I0131 04:30:26.282093 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f"
Jan 31 04:30:26 crc kubenswrapper[4667]: E0131 04:30:26.282536 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:30:39 crc kubenswrapper[4667]: I0131 04:30:39.282428 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f"
Jan 31 04:30:39 crc kubenswrapper[4667]: E0131 04:30:39.285074 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:30:54 crc kubenswrapper[4667]: I0131 04:30:54.282528 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f"
Jan 31 04:30:54 crc kubenswrapper[4667]: E0131 04:30:54.283467 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:31:08 crc kubenswrapper[4667]: I0131 04:31:08.282535 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f"
Jan 31 04:31:08 crc kubenswrapper[4667]: E0131 04:31:08.283455 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:31:23 crc kubenswrapper[4667]: I0131 04:31:23.282406 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f"
Jan 31 04:31:23 crc kubenswrapper[4667]: E0131 04:31:23.284010 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:31:38 crc kubenswrapper[4667]: I0131 04:31:38.282907 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f"
Jan 31 04:31:38 crc kubenswrapper[4667]: E0131 04:31:38.284107 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:31:38 crc kubenswrapper[4667]: I0131 04:31:38.935906 4667 generic.go:334] "Generic (PLEG): container finished" podID="8d2d3410-e5e4-4607-ab3c-74199d66293d" containerID="6694105a5596cb5206ee1f9d6a848ab3acfa4e6eff1258bbe55eeb0293990cfa" exitCode=0
Jan 31 04:31:38 crc kubenswrapper[4667]: I0131 04:31:38.935998 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs" event={"ID":"8d2d3410-e5e4-4607-ab3c-74199d66293d","Type":"ContainerDied","Data":"6694105a5596cb5206ee1f9d6a848ab3acfa4e6eff1258bbe55eeb0293990cfa"}
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.434897 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.525762 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-extra-config-0\") pod \"8d2d3410-e5e4-4607-ab3c-74199d66293d\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") "
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.526418 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-cell1-compute-config-0\") pod \"8d2d3410-e5e4-4607-ab3c-74199d66293d\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") "
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.526598 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-cell1-compute-config-1\") pod \"8d2d3410-e5e4-4607-ab3c-74199d66293d\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") "
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.526739 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-inventory\") pod \"8d2d3410-e5e4-4607-ab3c-74199d66293d\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") "
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.526886 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-migration-ssh-key-1\") pod \"8d2d3410-e5e4-4607-ab3c-74199d66293d\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") "
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.527004 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7fcd\" (UniqueName: \"kubernetes.io/projected/8d2d3410-e5e4-4607-ab3c-74199d66293d-kube-api-access-g7fcd\") pod \"8d2d3410-e5e4-4607-ab3c-74199d66293d\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") "
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.527160 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-migration-ssh-key-0\") pod \"8d2d3410-e5e4-4607-ab3c-74199d66293d\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") "
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.527273 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-combined-ca-bundle\") pod \"8d2d3410-e5e4-4607-ab3c-74199d66293d\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") "
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.527398 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-ssh-key-openstack-edpm-ipam\") pod \"8d2d3410-e5e4-4607-ab3c-74199d66293d\" (UID: \"8d2d3410-e5e4-4607-ab3c-74199d66293d\") "
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.540624 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d2d3410-e5e4-4607-ab3c-74199d66293d-kube-api-access-g7fcd" (OuterVolumeSpecName: "kube-api-access-g7fcd") pod "8d2d3410-e5e4-4607-ab3c-74199d66293d" (UID: "8d2d3410-e5e4-4607-ab3c-74199d66293d"). InnerVolumeSpecName "kube-api-access-g7fcd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.551097 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "8d2d3410-e5e4-4607-ab3c-74199d66293d" (UID: "8d2d3410-e5e4-4607-ab3c-74199d66293d"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.556056 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-inventory" (OuterVolumeSpecName: "inventory") pod "8d2d3410-e5e4-4607-ab3c-74199d66293d" (UID: "8d2d3410-e5e4-4607-ab3c-74199d66293d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.562200 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "8d2d3410-e5e4-4607-ab3c-74199d66293d" (UID: "8d2d3410-e5e4-4607-ab3c-74199d66293d"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.567108 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8d2d3410-e5e4-4607-ab3c-74199d66293d" (UID: "8d2d3410-e5e4-4607-ab3c-74199d66293d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.568273 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "8d2d3410-e5e4-4607-ab3c-74199d66293d" (UID: "8d2d3410-e5e4-4607-ab3c-74199d66293d"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.570688 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "8d2d3410-e5e4-4607-ab3c-74199d66293d" (UID: "8d2d3410-e5e4-4607-ab3c-74199d66293d"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.578042 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "8d2d3410-e5e4-4607-ab3c-74199d66293d" (UID: "8d2d3410-e5e4-4607-ab3c-74199d66293d"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.580164 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "8d2d3410-e5e4-4607-ab3c-74199d66293d" (UID: "8d2d3410-e5e4-4607-ab3c-74199d66293d"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.631033 4667 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.631189 4667 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.631283 4667 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-inventory\") on node \"crc\" DevicePath \"\""
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.631372 4667 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.631453 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7fcd\" (UniqueName: \"kubernetes.io/projected/8d2d3410-e5e4-4607-ab3c-74199d66293d-kube-api-access-g7fcd\") on node \"crc\" DevicePath \"\""
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.631527 4667 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.631599 4667 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.631673 4667 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d2d3410-e5e4-4607-ab3c-74199d66293d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.631748 4667 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8d2d3410-e5e4-4607-ab3c-74199d66293d-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.961969 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs" event={"ID":"8d2d3410-e5e4-4607-ab3c-74199d66293d","Type":"ContainerDied","Data":"60a4dcb5cc69dc87befe2d7a6cde8e312398267fca3ecf1326464660da7eafea"}
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.962036 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60a4dcb5cc69dc87befe2d7a6cde8e312398267fca3ecf1326464660da7eafea"
Jan 31 04:31:40 crc kubenswrapper[4667]: I0131 04:31:40.962075 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-7swhs"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.147903 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"]
Jan 31 04:31:41 crc kubenswrapper[4667]: E0131 04:31:41.149120 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d2d3410-e5e4-4607-ab3c-74199d66293d" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.149160 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d2d3410-e5e4-4607-ab3c-74199d66293d" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 31 04:31:41 crc kubenswrapper[4667]: E0131 04:31:41.149191 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb801819-1be3-484c-a16e-d06fbd5f6c22" containerName="collect-profiles"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.149205 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb801819-1be3-484c-a16e-d06fbd5f6c22" containerName="collect-profiles"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.150231 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb801819-1be3-484c-a16e-d06fbd5f6c22" containerName="collect-profiles"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.150288 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d2d3410-e5e4-4607-ab3c-74199d66293d" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.151435 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.155939 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.157216 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.157569 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7p2q"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.159937 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.169122 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.172956 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"]
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.248264 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.248677 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l82zj\" (UniqueName: \"kubernetes.io/projected/c2249d9c-021c-4dbf-8770-767be19d9404-kube-api-access-l82zj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.248742 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.248960 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.249306 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.255734 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.255929 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.358729 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l82zj\" (UniqueName: \"kubernetes.io/projected/c2249d9c-021c-4dbf-8770-767be19d9404-kube-api-access-l82zj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.358801 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.358863 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.358917 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.358962 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.359018 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.359057 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.364453 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.364480 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.366013 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.366445 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.366562 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.369568 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"
Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.389318 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l82zj\" (UniqueName:
\"kubernetes.io/projected/c2249d9c-021c-4dbf-8770-767be19d9404-kube-api-access-l82zj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c" Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.478330 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c" Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.899668 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c"] Jan 31 04:31:41 crc kubenswrapper[4667]: I0131 04:31:41.975935 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c" event={"ID":"c2249d9c-021c-4dbf-8770-767be19d9404","Type":"ContainerStarted","Data":"2a3c2a2975f659b7b8e67aeaf5c71a7ebda7eb5df0f6f9ea479090ab8c840ef3"} Jan 31 04:31:42 crc kubenswrapper[4667]: I0131 04:31:42.991630 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c" event={"ID":"c2249d9c-021c-4dbf-8770-767be19d9404","Type":"ContainerStarted","Data":"b22b72723ec497d945d609064d21113b1a171ab9e3dc8ef7726f6b4b79a81572"} Jan 31 04:31:43 crc kubenswrapper[4667]: I0131 04:31:43.025963 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c" podStartSLOduration=1.4778826409999999 podStartE2EDuration="2.025938502s" podCreationTimestamp="2026-01-31 04:31:41 +0000 UTC" firstStartedPulling="2026-01-31 04:31:41.910994713 +0000 UTC m=+2625.427330012" lastFinishedPulling="2026-01-31 04:31:42.459050534 +0000 UTC m=+2625.975385873" observedRunningTime="2026-01-31 04:31:43.016414261 +0000 UTC m=+2626.532749560" watchObservedRunningTime="2026-01-31 04:31:43.025938502 +0000 UTC m=+2626.542273821" Jan 31 04:31:52 crc kubenswrapper[4667]: I0131 04:31:52.282824 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f" Jan 31 04:31:52 crc kubenswrapper[4667]: E0131 04:31:52.284044 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:32:03 crc kubenswrapper[4667]: I0131 04:32:03.286555 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f" Jan 31 04:32:03 crc kubenswrapper[4667]: E0131 04:32:03.287833 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:32:14 crc kubenswrapper[4667]: I0131 04:32:14.282252 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f" Jan 31 04:32:14 crc 
kubenswrapper[4667]: E0131 04:32:14.283451 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:32:27 crc kubenswrapper[4667]: I0131 04:32:27.288960 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f" Jan 31 04:32:27 crc kubenswrapper[4667]: E0131 04:32:27.290216 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:32:40 crc kubenswrapper[4667]: I0131 04:32:40.282595 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f" Jan 31 04:32:40 crc kubenswrapper[4667]: E0131 04:32:40.283809 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:32:55 crc kubenswrapper[4667]: I0131 04:32:55.282735 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f" Jan 31 04:32:55 crc kubenswrapper[4667]: E0131 04:32:55.284050 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:33:09 crc kubenswrapper[4667]: I0131 04:33:09.282693 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f" Jan 31 04:33:09 crc kubenswrapper[4667]: E0131 04:33:09.284269 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:33:20 crc kubenswrapper[4667]: I0131 04:33:20.282957 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f" Jan 31 04:33:21 crc kubenswrapper[4667]: I0131 04:33:21.510091 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" 
event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerStarted","Data":"419bf0d7a3fec41291c94c017831c9dce37119879fd8ba11b94215270b777f4e"} Jan 31 04:33:40 crc kubenswrapper[4667]: I0131 04:33:40.698080 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jvtzb"] Jan 31 04:33:40 crc kubenswrapper[4667]: I0131 04:33:40.702504 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jvtzb" Jan 31 04:33:40 crc kubenswrapper[4667]: I0131 04:33:40.729200 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jvtzb"] Jan 31 04:33:40 crc kubenswrapper[4667]: I0131 04:33:40.891275 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e844a6-1298-42aa-87c2-2312d9b74d14-catalog-content\") pod \"redhat-operators-jvtzb\" (UID: \"76e844a6-1298-42aa-87c2-2312d9b74d14\") " pod="openshift-marketplace/redhat-operators-jvtzb" Jan 31 04:33:40 crc kubenswrapper[4667]: I0131 04:33:40.891366 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e844a6-1298-42aa-87c2-2312d9b74d14-utilities\") pod \"redhat-operators-jvtzb\" (UID: \"76e844a6-1298-42aa-87c2-2312d9b74d14\") " pod="openshift-marketplace/redhat-operators-jvtzb" Jan 31 04:33:40 crc kubenswrapper[4667]: I0131 04:33:40.891485 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29kt4\" (UniqueName: \"kubernetes.io/projected/76e844a6-1298-42aa-87c2-2312d9b74d14-kube-api-access-29kt4\") pod \"redhat-operators-jvtzb\" (UID: \"76e844a6-1298-42aa-87c2-2312d9b74d14\") " pod="openshift-marketplace/redhat-operators-jvtzb" Jan 31 04:33:40 crc kubenswrapper[4667]: I0131 04:33:40.993738 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e844a6-1298-42aa-87c2-2312d9b74d14-utilities\") pod \"redhat-operators-jvtzb\" (UID: \"76e844a6-1298-42aa-87c2-2312d9b74d14\") " pod="openshift-marketplace/redhat-operators-jvtzb" Jan 31 04:33:40 crc kubenswrapper[4667]: I0131 04:33:40.994001 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29kt4\" (UniqueName: \"kubernetes.io/projected/76e844a6-1298-42aa-87c2-2312d9b74d14-kube-api-access-29kt4\") pod \"redhat-operators-jvtzb\" (UID: \"76e844a6-1298-42aa-87c2-2312d9b74d14\") " pod="openshift-marketplace/redhat-operators-jvtzb" Jan 31 04:33:40 crc kubenswrapper[4667]: I0131 04:33:40.994090 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e844a6-1298-42aa-87c2-2312d9b74d14-catalog-content\") pod \"redhat-operators-jvtzb\" (UID: \"76e844a6-1298-42aa-87c2-2312d9b74d14\") " pod="openshift-marketplace/redhat-operators-jvtzb" Jan 31 04:33:40 crc kubenswrapper[4667]: I0131 04:33:40.994389 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e844a6-1298-42aa-87c2-2312d9b74d14-utilities\") pod \"redhat-operators-jvtzb\" (UID: \"76e844a6-1298-42aa-87c2-2312d9b74d14\") " pod="openshift-marketplace/redhat-operators-jvtzb" Jan 31 04:33:40 crc kubenswrapper[4667]: I0131 04:33:40.994682 4667 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e844a6-1298-42aa-87c2-2312d9b74d14-catalog-content\") pod \"redhat-operators-jvtzb\" (UID: \"76e844a6-1298-42aa-87c2-2312d9b74d14\") " pod="openshift-marketplace/redhat-operators-jvtzb" Jan 31 04:33:41 crc kubenswrapper[4667]: I0131 04:33:41.019743 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29kt4\" (UniqueName: \"kubernetes.io/projected/76e844a6-1298-42aa-87c2-2312d9b74d14-kube-api-access-29kt4\") pod \"redhat-operators-jvtzb\" (UID: \"76e844a6-1298-42aa-87c2-2312d9b74d14\") " pod="openshift-marketplace/redhat-operators-jvtzb" Jan 31 04:33:41 crc kubenswrapper[4667]: I0131 04:33:41.039579 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jvtzb" Jan 31 04:33:41 crc kubenswrapper[4667]: I0131 04:33:41.612348 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jvtzb"] Jan 31 04:33:41 crc kubenswrapper[4667]: I0131 04:33:41.760158 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvtzb" event={"ID":"76e844a6-1298-42aa-87c2-2312d9b74d14","Type":"ContainerStarted","Data":"93b27f41b5700e0363becd998d43ad8b0df6be2e8b031b0e2177ab217c30514e"} Jan 31 04:33:42 crc kubenswrapper[4667]: I0131 04:33:42.775321 4667 generic.go:334] "Generic (PLEG): container finished" podID="76e844a6-1298-42aa-87c2-2312d9b74d14" containerID="eb6db36192ef55b2ec88a349388a4d5ded5ffec17457172564348e27a5ab6487" exitCode=0 Jan 31 04:33:42 crc kubenswrapper[4667]: I0131 04:33:42.775395 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvtzb" event={"ID":"76e844a6-1298-42aa-87c2-2312d9b74d14","Type":"ContainerDied","Data":"eb6db36192ef55b2ec88a349388a4d5ded5ffec17457172564348e27a5ab6487"} Jan 31 04:33:44 crc kubenswrapper[4667]: I0131 04:33:44.800512 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvtzb" event={"ID":"76e844a6-1298-42aa-87c2-2312d9b74d14","Type":"ContainerStarted","Data":"af4c6507e1106194c5a29f1046949bde4ba13e137817e4afbf56b29362727584"} Jan 31 04:33:47 crc kubenswrapper[4667]: I0131 04:33:47.846372 4667 generic.go:334] "Generic (PLEG): container finished" podID="76e844a6-1298-42aa-87c2-2312d9b74d14" containerID="af4c6507e1106194c5a29f1046949bde4ba13e137817e4afbf56b29362727584" exitCode=0 Jan 31 04:33:47 crc kubenswrapper[4667]: I0131 04:33:47.846484 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvtzb" event={"ID":"76e844a6-1298-42aa-87c2-2312d9b74d14","Type":"ContainerDied","Data":"af4c6507e1106194c5a29f1046949bde4ba13e137817e4afbf56b29362727584"} Jan 31 04:33:48 crc kubenswrapper[4667]: I0131 04:33:48.863331 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvtzb" event={"ID":"76e844a6-1298-42aa-87c2-2312d9b74d14","Type":"ContainerStarted","Data":"1a82f59d7f5a6b9c54d2110c3e1df017a0d086b9bf11dab205d02c6ddceea25e"} Jan 31 04:33:48 crc kubenswrapper[4667]: I0131 04:33:48.885794 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jvtzb" podStartSLOduration=3.4235135899999998 podStartE2EDuration="8.885775162s" podCreationTimestamp="2026-01-31 04:33:40 +0000 UTC" firstStartedPulling="2026-01-31 04:33:42.780158147 +0000 UTC m=+2746.296493456" 
lastFinishedPulling="2026-01-31 04:33:48.242419699 +0000 UTC m=+2751.758755028" observedRunningTime="2026-01-31 04:33:48.883391499 +0000 UTC m=+2752.399726798" watchObservedRunningTime="2026-01-31 04:33:48.885775162 +0000 UTC m=+2752.402110461" Jan 31 04:33:51 crc kubenswrapper[4667]: I0131 04:33:51.040633 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jvtzb" Jan 31 04:33:51 crc kubenswrapper[4667]: I0131 04:33:51.041075 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jvtzb" Jan 31 04:33:52 crc kubenswrapper[4667]: I0131 04:33:52.090529 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jvtzb" podUID="76e844a6-1298-42aa-87c2-2312d9b74d14" containerName="registry-server" probeResult="failure" output=< Jan 31 04:33:52 crc kubenswrapper[4667]: timeout: failed to connect service ":50051" within 1s Jan 31 04:33:52 crc kubenswrapper[4667]: > Jan 31 04:34:01 crc kubenswrapper[4667]: I0131 04:34:01.116269 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jvtzb" Jan 31 04:34:01 crc kubenswrapper[4667]: I0131 04:34:01.174186 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jvtzb" Jan 31 04:34:01 crc kubenswrapper[4667]: I0131 04:34:01.356375 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jvtzb"] Jan 31 04:34:03 crc kubenswrapper[4667]: I0131 04:34:03.034769 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jvtzb" podUID="76e844a6-1298-42aa-87c2-2312d9b74d14" containerName="registry-server" containerID="cri-o://1a82f59d7f5a6b9c54d2110c3e1df017a0d086b9bf11dab205d02c6ddceea25e" gracePeriod=2 Jan 31 04:34:03 crc kubenswrapper[4667]: I0131 04:34:03.583952 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jvtzb" Jan 31 04:34:03 crc kubenswrapper[4667]: I0131 04:34:03.635473 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e844a6-1298-42aa-87c2-2312d9b74d14-catalog-content\") pod \"76e844a6-1298-42aa-87c2-2312d9b74d14\" (UID: \"76e844a6-1298-42aa-87c2-2312d9b74d14\") " Jan 31 04:34:03 crc kubenswrapper[4667]: I0131 04:34:03.635518 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e844a6-1298-42aa-87c2-2312d9b74d14-utilities\") pod \"76e844a6-1298-42aa-87c2-2312d9b74d14\" (UID: \"76e844a6-1298-42aa-87c2-2312d9b74d14\") " Jan 31 04:34:03 crc kubenswrapper[4667]: I0131 04:34:03.635582 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29kt4\" (UniqueName: \"kubernetes.io/projected/76e844a6-1298-42aa-87c2-2312d9b74d14-kube-api-access-29kt4\") pod \"76e844a6-1298-42aa-87c2-2312d9b74d14\" (UID: \"76e844a6-1298-42aa-87c2-2312d9b74d14\") " Jan 31 04:34:03 crc kubenswrapper[4667]: I0131 04:34:03.642249 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76e844a6-1298-42aa-87c2-2312d9b74d14-utilities" (OuterVolumeSpecName: "utilities") pod "76e844a6-1298-42aa-87c2-2312d9b74d14" (UID: "76e844a6-1298-42aa-87c2-2312d9b74d14"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:34:03 crc kubenswrapper[4667]: I0131 04:34:03.653861 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e844a6-1298-42aa-87c2-2312d9b74d14-kube-api-access-29kt4" (OuterVolumeSpecName: "kube-api-access-29kt4") pod "76e844a6-1298-42aa-87c2-2312d9b74d14" (UID: "76e844a6-1298-42aa-87c2-2312d9b74d14"). InnerVolumeSpecName "kube-api-access-29kt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:34:03 crc kubenswrapper[4667]: I0131 04:34:03.736742 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76e844a6-1298-42aa-87c2-2312d9b74d14-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:34:03 crc kubenswrapper[4667]: I0131 04:34:03.737026 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29kt4\" (UniqueName: \"kubernetes.io/projected/76e844a6-1298-42aa-87c2-2312d9b74d14-kube-api-access-29kt4\") on node \"crc\" DevicePath \"\"" Jan 31 04:34:03 crc kubenswrapper[4667]: I0131 04:34:03.779201 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76e844a6-1298-42aa-87c2-2312d9b74d14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76e844a6-1298-42aa-87c2-2312d9b74d14" (UID: "76e844a6-1298-42aa-87c2-2312d9b74d14"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:34:03 crc kubenswrapper[4667]: I0131 04:34:03.838521 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76e844a6-1298-42aa-87c2-2312d9b74d14-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:34:04 crc kubenswrapper[4667]: I0131 04:34:04.050355 4667 generic.go:334] "Generic (PLEG): container finished" podID="76e844a6-1298-42aa-87c2-2312d9b74d14" containerID="1a82f59d7f5a6b9c54d2110c3e1df017a0d086b9bf11dab205d02c6ddceea25e" exitCode=0 Jan 31 04:34:04 crc kubenswrapper[4667]: I0131 04:34:04.050425 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvtzb" event={"ID":"76e844a6-1298-42aa-87c2-2312d9b74d14","Type":"ContainerDied","Data":"1a82f59d7f5a6b9c54d2110c3e1df017a0d086b9bf11dab205d02c6ddceea25e"} Jan 31 04:34:04 crc kubenswrapper[4667]: I0131 04:34:04.050793 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvtzb" event={"ID":"76e844a6-1298-42aa-87c2-2312d9b74d14","Type":"ContainerDied","Data":"93b27f41b5700e0363becd998d43ad8b0df6be2e8b031b0e2177ab217c30514e"} Jan 31 04:34:04 crc kubenswrapper[4667]: I0131 04:34:04.050824 4667 scope.go:117] "RemoveContainer" containerID="1a82f59d7f5a6b9c54d2110c3e1df017a0d086b9bf11dab205d02c6ddceea25e" Jan 31 04:34:04 crc kubenswrapper[4667]: I0131 04:34:04.050475 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jvtzb" Jan 31 04:34:04 crc kubenswrapper[4667]: I0131 04:34:04.125032 4667 scope.go:117] "RemoveContainer" containerID="af4c6507e1106194c5a29f1046949bde4ba13e137817e4afbf56b29362727584" Jan 31 04:34:04 crc kubenswrapper[4667]: I0131 04:34:04.145670 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jvtzb"] Jan 31 04:34:04 crc kubenswrapper[4667]: I0131 04:34:04.174875 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jvtzb"] Jan 31 04:34:04 crc kubenswrapper[4667]: I0131 04:34:04.178008 4667 scope.go:117] "RemoveContainer" containerID="eb6db36192ef55b2ec88a349388a4d5ded5ffec17457172564348e27a5ab6487" Jan 31 04:34:04 crc kubenswrapper[4667]: I0131 04:34:04.235688 4667 scope.go:117] "RemoveContainer" containerID="1a82f59d7f5a6b9c54d2110c3e1df017a0d086b9bf11dab205d02c6ddceea25e" Jan 31 04:34:04 crc kubenswrapper[4667]: E0131 04:34:04.237234 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a82f59d7f5a6b9c54d2110c3e1df017a0d086b9bf11dab205d02c6ddceea25e\": container with ID starting with 1a82f59d7f5a6b9c54d2110c3e1df017a0d086b9bf11dab205d02c6ddceea25e not found: ID does not exist" containerID="1a82f59d7f5a6b9c54d2110c3e1df017a0d086b9bf11dab205d02c6ddceea25e" Jan 31 04:34:04 crc kubenswrapper[4667]: I0131 04:34:04.237303 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a82f59d7f5a6b9c54d2110c3e1df017a0d086b9bf11dab205d02c6ddceea25e"} err="failed to get container status \"1a82f59d7f5a6b9c54d2110c3e1df017a0d086b9bf11dab205d02c6ddceea25e\": rpc error: code = NotFound desc = could not find container \"1a82f59d7f5a6b9c54d2110c3e1df017a0d086b9bf11dab205d02c6ddceea25e\": container with ID starting with 1a82f59d7f5a6b9c54d2110c3e1df017a0d086b9bf11dab205d02c6ddceea25e not found: ID does not exist" Jan 31 04:34:04 crc kubenswrapper[4667]: I0131 04:34:04.237339 4667 scope.go:117] "RemoveContainer" containerID="af4c6507e1106194c5a29f1046949bde4ba13e137817e4afbf56b29362727584" Jan 31 04:34:04 crc kubenswrapper[4667]: E0131 04:34:04.237969 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af4c6507e1106194c5a29f1046949bde4ba13e137817e4afbf56b29362727584\": container with ID starting with af4c6507e1106194c5a29f1046949bde4ba13e137817e4afbf56b29362727584 not found: ID does not exist" containerID="af4c6507e1106194c5a29f1046949bde4ba13e137817e4afbf56b29362727584" Jan 31 04:34:04 crc kubenswrapper[4667]: I0131 04:34:04.237998 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af4c6507e1106194c5a29f1046949bde4ba13e137817e4afbf56b29362727584"} err="failed to get container status \"af4c6507e1106194c5a29f1046949bde4ba13e137817e4afbf56b29362727584\": rpc error: code = NotFound desc = could not find container \"af4c6507e1106194c5a29f1046949bde4ba13e137817e4afbf56b29362727584\": container with ID starting with af4c6507e1106194c5a29f1046949bde4ba13e137817e4afbf56b29362727584 not found: ID does not exist" Jan 31 04:34:04 crc kubenswrapper[4667]: I0131 04:34:04.238013 4667 scope.go:117] "RemoveContainer" containerID="eb6db36192ef55b2ec88a349388a4d5ded5ffec17457172564348e27a5ab6487" Jan 31 04:34:04 crc kubenswrapper[4667]: E0131 04:34:04.238874 4667 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"eb6db36192ef55b2ec88a349388a4d5ded5ffec17457172564348e27a5ab6487\": container with ID starting with eb6db36192ef55b2ec88a349388a4d5ded5ffec17457172564348e27a5ab6487 not found: ID does not exist" containerID="eb6db36192ef55b2ec88a349388a4d5ded5ffec17457172564348e27a5ab6487" Jan 31 04:34:04 crc kubenswrapper[4667]: I0131 04:34:04.238898 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb6db36192ef55b2ec88a349388a4d5ded5ffec17457172564348e27a5ab6487"} err="failed to get container status \"eb6db36192ef55b2ec88a349388a4d5ded5ffec17457172564348e27a5ab6487\": rpc error: code = NotFound desc = could not find container \"eb6db36192ef55b2ec88a349388a4d5ded5ffec17457172564348e27a5ab6487\": container with ID starting with eb6db36192ef55b2ec88a349388a4d5ded5ffec17457172564348e27a5ab6487 not found: ID does not exist" Jan 31 04:34:05 crc kubenswrapper[4667]: I0131 04:34:05.298532 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e844a6-1298-42aa-87c2-2312d9b74d14" path="/var/lib/kubelet/pods/76e844a6-1298-42aa-87c2-2312d9b74d14/volumes" Jan 31 04:34:45 crc kubenswrapper[4667]: I0131 04:34:45.540162 4667 generic.go:334] "Generic (PLEG): container finished" podID="c2249d9c-021c-4dbf-8770-767be19d9404" containerID="b22b72723ec497d945d609064d21113b1a171ab9e3dc8ef7726f6b4b79a81572" exitCode=0 Jan 31 04:34:45 crc kubenswrapper[4667]: I0131 04:34:45.540256 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c" event={"ID":"c2249d9c-021c-4dbf-8770-767be19d9404","Type":"ContainerDied","Data":"b22b72723ec497d945d609064d21113b1a171ab9e3dc8ef7726f6b4b79a81572"} Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.161314 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c" Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.269362 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ceilometer-compute-config-data-0\") pod \"c2249d9c-021c-4dbf-8770-767be19d9404\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.269428 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ceilometer-compute-config-data-2\") pod \"c2249d9c-021c-4dbf-8770-767be19d9404\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.269512 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ssh-key-openstack-edpm-ipam\") pod \"c2249d9c-021c-4dbf-8770-767be19d9404\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.269592 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ceilometer-compute-config-data-1\") pod \"c2249d9c-021c-4dbf-8770-767be19d9404\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.269640 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-inventory\") pod \"c2249d9c-021c-4dbf-8770-767be19d9404\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.269685 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-telemetry-combined-ca-bundle\") pod \"c2249d9c-021c-4dbf-8770-767be19d9404\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.269817 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l82zj\" (UniqueName: \"kubernetes.io/projected/c2249d9c-021c-4dbf-8770-767be19d9404-kube-api-access-l82zj\") pod \"c2249d9c-021c-4dbf-8770-767be19d9404\" (UID: \"c2249d9c-021c-4dbf-8770-767be19d9404\") " Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.280299 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "c2249d9c-021c-4dbf-8770-767be19d9404" (UID: "c2249d9c-021c-4dbf-8770-767be19d9404"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.305291 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2249d9c-021c-4dbf-8770-767be19d9404-kube-api-access-l82zj" (OuterVolumeSpecName: "kube-api-access-l82zj") pod "c2249d9c-021c-4dbf-8770-767be19d9404" (UID: "c2249d9c-021c-4dbf-8770-767be19d9404"). InnerVolumeSpecName "kube-api-access-l82zj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.323170 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "c2249d9c-021c-4dbf-8770-767be19d9404" (UID: "c2249d9c-021c-4dbf-8770-767be19d9404"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.327092 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "c2249d9c-021c-4dbf-8770-767be19d9404" (UID: "c2249d9c-021c-4dbf-8770-767be19d9404"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.331477 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c2249d9c-021c-4dbf-8770-767be19d9404" (UID: "c2249d9c-021c-4dbf-8770-767be19d9404"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.340054 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "c2249d9c-021c-4dbf-8770-767be19d9404" (UID: "c2249d9c-021c-4dbf-8770-767be19d9404"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.342294 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-inventory" (OuterVolumeSpecName: "inventory") pod "c2249d9c-021c-4dbf-8770-767be19d9404" (UID: "c2249d9c-021c-4dbf-8770-767be19d9404"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.374008 4667 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.374040 4667 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-inventory\") on node \"crc\" DevicePath \"\"" Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.374052 4667 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.374064 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l82zj\" (UniqueName: \"kubernetes.io/projected/c2249d9c-021c-4dbf-8770-767be19d9404-kube-api-access-l82zj\") on node \"crc\" DevicePath \"\"" Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.374075 4667 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.374085 4667 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.374094 4667 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2249d9c-021c-4dbf-8770-767be19d9404-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.570188 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c" event={"ID":"c2249d9c-021c-4dbf-8770-767be19d9404","Type":"ContainerDied","Data":"2a3c2a2975f659b7b8e67aeaf5c71a7ebda7eb5df0f6f9ea479090ab8c840ef3"} Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.570730 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a3c2a2975f659b7b8e67aeaf5c71a7ebda7eb5df0f6f9ea479090ab8c840ef3" Jan 31 04:34:47 crc kubenswrapper[4667]: I0131 04:34:47.570300 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c" Jan 31 04:34:55 crc kubenswrapper[4667]: I0131 04:34:55.911582 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-49hhv"] Jan 31 04:34:55 crc kubenswrapper[4667]: E0131 04:34:55.912690 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e844a6-1298-42aa-87c2-2312d9b74d14" containerName="registry-server" Jan 31 04:34:55 crc kubenswrapper[4667]: I0131 04:34:55.912708 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e844a6-1298-42aa-87c2-2312d9b74d14" containerName="registry-server" Jan 31 04:34:55 crc kubenswrapper[4667]: E0131 04:34:55.912738 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2249d9c-021c-4dbf-8770-767be19d9404" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 31 04:34:55 crc kubenswrapper[4667]: I0131 04:34:55.912748 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2249d9c-021c-4dbf-8770-767be19d9404" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 31 04:34:55 crc kubenswrapper[4667]: E0131 04:34:55.912766 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e844a6-1298-42aa-87c2-2312d9b74d14" containerName="extract-utilities" Jan 31 04:34:55 crc kubenswrapper[4667]: I0131 04:34:55.912773 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e844a6-1298-42aa-87c2-2312d9b74d14" containerName="extract-utilities" Jan 31 04:34:55 crc kubenswrapper[4667]: E0131 04:34:55.912794 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e844a6-1298-42aa-87c2-2312d9b74d14" containerName="extract-content" Jan 31 04:34:55 crc kubenswrapper[4667]: I0131 04:34:55.912801 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e844a6-1298-42aa-87c2-2312d9b74d14" containerName="extract-content" Jan 31 04:34:55 crc kubenswrapper[4667]: I0131 04:34:55.913060 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2249d9c-021c-4dbf-8770-767be19d9404" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 31 04:34:55 crc kubenswrapper[4667]: I0131 04:34:55.913086 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e844a6-1298-42aa-87c2-2312d9b74d14" containerName="registry-server" Jan 31 04:34:55 crc kubenswrapper[4667]: I0131 04:34:55.915653 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-49hhv" Jan 31 04:34:55 crc kubenswrapper[4667]: I0131 04:34:55.958294 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-49hhv"] Jan 31 04:34:55 crc kubenswrapper[4667]: I0131 04:34:55.994336 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b2abc09-a368-48c8-a0f0-b900c5bb9be1-catalog-content\") pod \"certified-operators-49hhv\" (UID: \"9b2abc09-a368-48c8-a0f0-b900c5bb9be1\") " pod="openshift-marketplace/certified-operators-49hhv" Jan 31 04:34:55 crc kubenswrapper[4667]: I0131 04:34:55.994426 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b2abc09-a368-48c8-a0f0-b900c5bb9be1-utilities\") pod \"certified-operators-49hhv\" (UID: \"9b2abc09-a368-48c8-a0f0-b900c5bb9be1\") " pod="openshift-marketplace/certified-operators-49hhv" Jan 31 04:34:55 crc kubenswrapper[4667]: I0131 04:34:55.994539 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj66r\" (UniqueName: \"kubernetes.io/projected/9b2abc09-a368-48c8-a0f0-b900c5bb9be1-kube-api-access-bj66r\") pod \"certified-operators-49hhv\" (UID: \"9b2abc09-a368-48c8-a0f0-b900c5bb9be1\") " pod="openshift-marketplace/certified-operators-49hhv" Jan 31 04:34:56 crc kubenswrapper[4667]: I0131 04:34:56.097024 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b2abc09-a368-48c8-a0f0-b900c5bb9be1-catalog-content\") pod \"certified-operators-49hhv\" (UID: \"9b2abc09-a368-48c8-a0f0-b900c5bb9be1\") " pod="openshift-marketplace/certified-operators-49hhv" Jan 31 04:34:56 crc kubenswrapper[4667]: I0131 04:34:56.097186 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b2abc09-a368-48c8-a0f0-b900c5bb9be1-utilities\") pod \"certified-operators-49hhv\" (UID: \"9b2abc09-a368-48c8-a0f0-b900c5bb9be1\") " pod="openshift-marketplace/certified-operators-49hhv" Jan 31 04:34:56 crc kubenswrapper[4667]: I0131 04:34:56.097479 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj66r\" (UniqueName: \"kubernetes.io/projected/9b2abc09-a368-48c8-a0f0-b900c5bb9be1-kube-api-access-bj66r\") pod \"certified-operators-49hhv\" (UID: \"9b2abc09-a368-48c8-a0f0-b900c5bb9be1\") " pod="openshift-marketplace/certified-operators-49hhv" Jan 31 04:34:56 crc kubenswrapper[4667]: I0131 04:34:56.097709 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b2abc09-a368-48c8-a0f0-b900c5bb9be1-utilities\") pod \"certified-operators-49hhv\" (UID: \"9b2abc09-a368-48c8-a0f0-b900c5bb9be1\") " pod="openshift-marketplace/certified-operators-49hhv" Jan 31 04:34:56 crc kubenswrapper[4667]: I0131 04:34:56.097755 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b2abc09-a368-48c8-a0f0-b900c5bb9be1-catalog-content\") pod \"certified-operators-49hhv\" (UID: \"9b2abc09-a368-48c8-a0f0-b900c5bb9be1\") " pod="openshift-marketplace/certified-operators-49hhv" Jan 31 04:34:56 crc kubenswrapper[4667]: I0131 04:34:56.122576 4667 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bj66r\" (UniqueName: \"kubernetes.io/projected/9b2abc09-a368-48c8-a0f0-b900c5bb9be1-kube-api-access-bj66r\") pod \"certified-operators-49hhv\" (UID: \"9b2abc09-a368-48c8-a0f0-b900c5bb9be1\") " pod="openshift-marketplace/certified-operators-49hhv" Jan 31 04:34:56 crc kubenswrapper[4667]: I0131 04:34:56.258891 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-49hhv" Jan 31 04:34:57 crc kubenswrapper[4667]: I0131 04:34:57.504225 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-49hhv"] Jan 31 04:34:57 crc kubenswrapper[4667]: I0131 04:34:57.674629 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49hhv" event={"ID":"9b2abc09-a368-48c8-a0f0-b900c5bb9be1","Type":"ContainerStarted","Data":"7f7140f1134d28bd7d2e77250a53ce79f8e78ae92c3a0ad1fcf42776ed9389cb"} Jan 31 04:34:58 crc kubenswrapper[4667]: I0131 04:34:58.689178 4667 generic.go:334] "Generic (PLEG): container finished" podID="9b2abc09-a368-48c8-a0f0-b900c5bb9be1" containerID="536fbe598d0b9f8c655a31f9316c1ac0748c3de2f795030f3d11430dba02bb86" exitCode=0 Jan 31 04:34:58 crc kubenswrapper[4667]: I0131 04:34:58.689872 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49hhv" event={"ID":"9b2abc09-a368-48c8-a0f0-b900c5bb9be1","Type":"ContainerDied","Data":"536fbe598d0b9f8c655a31f9316c1ac0748c3de2f795030f3d11430dba02bb86"} Jan 31 04:34:58 crc kubenswrapper[4667]: I0131 04:34:58.692858 4667 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:34:59 crc kubenswrapper[4667]: I0131 04:34:59.701994 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49hhv" event={"ID":"9b2abc09-a368-48c8-a0f0-b900c5bb9be1","Type":"ContainerStarted","Data":"6d40260f4e79c7dce9758d71a34171c3f2eba0f8d68ec2557ad919105c1e20a8"} Jan 31 04:35:01 crc kubenswrapper[4667]: I0131 04:35:01.727121 4667 generic.go:334] "Generic (PLEG): container finished" podID="9b2abc09-a368-48c8-a0f0-b900c5bb9be1" containerID="6d40260f4e79c7dce9758d71a34171c3f2eba0f8d68ec2557ad919105c1e20a8" exitCode=0 Jan 31 04:35:01 crc kubenswrapper[4667]: I0131 04:35:01.727203 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49hhv" event={"ID":"9b2abc09-a368-48c8-a0f0-b900c5bb9be1","Type":"ContainerDied","Data":"6d40260f4e79c7dce9758d71a34171c3f2eba0f8d68ec2557ad919105c1e20a8"} Jan 31 04:35:02 crc kubenswrapper[4667]: I0131 04:35:02.741880 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49hhv" event={"ID":"9b2abc09-a368-48c8-a0f0-b900c5bb9be1","Type":"ContainerStarted","Data":"e3ae243bfac0885e2c2094fb2ec9815cf78253821e24d021477c6df3c069fb25"} Jan 31 04:35:02 crc kubenswrapper[4667]: I0131 04:35:02.777574 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-49hhv" podStartSLOduration=4.350443139 podStartE2EDuration="7.777553324s" podCreationTimestamp="2026-01-31 04:34:55 +0000 UTC" firstStartedPulling="2026-01-31 04:34:58.692526203 +0000 UTC m=+2822.208861502" lastFinishedPulling="2026-01-31 04:35:02.119636378 +0000 UTC m=+2825.635971687" observedRunningTime="2026-01-31 04:35:02.769336118 +0000 UTC m=+2826.285671427" watchObservedRunningTime="2026-01-31 
04:35:02.777553324 +0000 UTC m=+2826.293888633"
Jan 31 04:35:06 crc kubenswrapper[4667]: I0131 04:35:06.259894 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-49hhv"
Jan 31 04:35:06 crc kubenswrapper[4667]: I0131 04:35:06.260815 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-49hhv"
Jan 31 04:35:06 crc kubenswrapper[4667]: I0131 04:35:06.347114 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-49hhv"
Jan 31 04:35:16 crc kubenswrapper[4667]: I0131 04:35:16.334460 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-49hhv"
Jan 31 04:35:16 crc kubenswrapper[4667]: I0131 04:35:16.410713 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-49hhv"]
Jan 31 04:35:16 crc kubenswrapper[4667]: I0131 04:35:16.911018 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-49hhv" podUID="9b2abc09-a368-48c8-a0f0-b900c5bb9be1" containerName="registry-server" containerID="cri-o://e3ae243bfac0885e2c2094fb2ec9815cf78253821e24d021477c6df3c069fb25" gracePeriod=2
Jan 31 04:35:17 crc kubenswrapper[4667]: I0131 04:35:17.415654 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-49hhv"
Jan 31 04:35:17 crc kubenswrapper[4667]: I0131 04:35:17.578470 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b2abc09-a368-48c8-a0f0-b900c5bb9be1-utilities\") pod \"9b2abc09-a368-48c8-a0f0-b900c5bb9be1\" (UID: \"9b2abc09-a368-48c8-a0f0-b900c5bb9be1\") "
Jan 31 04:35:17 crc kubenswrapper[4667]: I0131 04:35:17.578942 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj66r\" (UniqueName: \"kubernetes.io/projected/9b2abc09-a368-48c8-a0f0-b900c5bb9be1-kube-api-access-bj66r\") pod \"9b2abc09-a368-48c8-a0f0-b900c5bb9be1\" (UID: \"9b2abc09-a368-48c8-a0f0-b900c5bb9be1\") "
Jan 31 04:35:17 crc kubenswrapper[4667]: I0131 04:35:17.579074 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b2abc09-a368-48c8-a0f0-b900c5bb9be1-catalog-content\") pod \"9b2abc09-a368-48c8-a0f0-b900c5bb9be1\" (UID: \"9b2abc09-a368-48c8-a0f0-b900c5bb9be1\") "
Jan 31 04:35:17 crc kubenswrapper[4667]: I0131 04:35:17.579576 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b2abc09-a368-48c8-a0f0-b900c5bb9be1-utilities" (OuterVolumeSpecName: "utilities") pod "9b2abc09-a368-48c8-a0f0-b900c5bb9be1" (UID: "9b2abc09-a368-48c8-a0f0-b900c5bb9be1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:35:17 crc kubenswrapper[4667]: I0131 04:35:17.580026 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b2abc09-a368-48c8-a0f0-b900c5bb9be1-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 04:35:17 crc kubenswrapper[4667]: I0131 04:35:17.585879 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b2abc09-a368-48c8-a0f0-b900c5bb9be1-kube-api-access-bj66r" (OuterVolumeSpecName: "kube-api-access-bj66r") pod "9b2abc09-a368-48c8-a0f0-b900c5bb9be1" (UID: "9b2abc09-a368-48c8-a0f0-b900c5bb9be1"). InnerVolumeSpecName "kube-api-access-bj66r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:35:17 crc kubenswrapper[4667]: I0131 04:35:17.648105 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b2abc09-a368-48c8-a0f0-b900c5bb9be1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b2abc09-a368-48c8-a0f0-b900c5bb9be1" (UID: "9b2abc09-a368-48c8-a0f0-b900c5bb9be1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:35:17 crc kubenswrapper[4667]: I0131 04:35:17.681912 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj66r\" (UniqueName: \"kubernetes.io/projected/9b2abc09-a368-48c8-a0f0-b900c5bb9be1-kube-api-access-bj66r\") on node \"crc\" DevicePath \"\""
Jan 31 04:35:17 crc kubenswrapper[4667]: I0131 04:35:17.681954 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b2abc09-a368-48c8-a0f0-b900c5bb9be1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 04:35:17 crc kubenswrapper[4667]: I0131 04:35:17.933100 4667 generic.go:334] "Generic (PLEG): container finished" podID="9b2abc09-a368-48c8-a0f0-b900c5bb9be1" containerID="e3ae243bfac0885e2c2094fb2ec9815cf78253821e24d021477c6df3c069fb25" exitCode=0
Jan 31 04:35:17 crc kubenswrapper[4667]: I0131 04:35:17.933178 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49hhv" event={"ID":"9b2abc09-a368-48c8-a0f0-b900c5bb9be1","Type":"ContainerDied","Data":"e3ae243bfac0885e2c2094fb2ec9815cf78253821e24d021477c6df3c069fb25"}
Jan 31 04:35:17 crc kubenswrapper[4667]: I0131 04:35:17.933198 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-49hhv"
Jan 31 04:35:17 crc kubenswrapper[4667]: I0131 04:35:17.933228 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49hhv" event={"ID":"9b2abc09-a368-48c8-a0f0-b900c5bb9be1","Type":"ContainerDied","Data":"7f7140f1134d28bd7d2e77250a53ce79f8e78ae92c3a0ad1fcf42776ed9389cb"}
Jan 31 04:35:17 crc kubenswrapper[4667]: I0131 04:35:17.933257 4667 scope.go:117] "RemoveContainer" containerID="e3ae243bfac0885e2c2094fb2ec9815cf78253821e24d021477c6df3c069fb25"
Jan 31 04:35:17 crc kubenswrapper[4667]: I0131 04:35:17.972370 4667 scope.go:117] "RemoveContainer" containerID="6d40260f4e79c7dce9758d71a34171c3f2eba0f8d68ec2557ad919105c1e20a8"
Jan 31 04:35:17 crc kubenswrapper[4667]: I0131 04:35:17.976388 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-49hhv"]
Jan 31 04:35:18 crc kubenswrapper[4667]: I0131 04:35:18.005824 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-49hhv"]
Jan 31 04:35:18 crc kubenswrapper[4667]: I0131 04:35:18.016629 4667 scope.go:117] "RemoveContainer" containerID="536fbe598d0b9f8c655a31f9316c1ac0748c3de2f795030f3d11430dba02bb86"
Jan 31 04:35:18 crc kubenswrapper[4667]: I0131 04:35:18.089053 4667 scope.go:117] "RemoveContainer" containerID="e3ae243bfac0885e2c2094fb2ec9815cf78253821e24d021477c6df3c069fb25"
Jan 31 04:35:18 crc kubenswrapper[4667]: E0131 04:35:18.094230 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3ae243bfac0885e2c2094fb2ec9815cf78253821e24d021477c6df3c069fb25\": container with ID starting with e3ae243bfac0885e2c2094fb2ec9815cf78253821e24d021477c6df3c069fb25 not found: ID does not exist" containerID="e3ae243bfac0885e2c2094fb2ec9815cf78253821e24d021477c6df3c069fb25"
Jan 31 04:35:18 crc kubenswrapper[4667]: I0131 04:35:18.094289 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3ae243bfac0885e2c2094fb2ec9815cf78253821e24d021477c6df3c069fb25"} err="failed to get container status \"e3ae243bfac0885e2c2094fb2ec9815cf78253821e24d021477c6df3c069fb25\": rpc error: code = NotFound desc = could not find container \"e3ae243bfac0885e2c2094fb2ec9815cf78253821e24d021477c6df3c069fb25\": container with ID starting with e3ae243bfac0885e2c2094fb2ec9815cf78253821e24d021477c6df3c069fb25 not found: ID does not exist"
Jan 31 04:35:18 crc kubenswrapper[4667]: I0131 04:35:18.094326 4667 scope.go:117] "RemoveContainer" containerID="6d40260f4e79c7dce9758d71a34171c3f2eba0f8d68ec2557ad919105c1e20a8"
Jan 31 04:35:18 crc kubenswrapper[4667]: E0131 04:35:18.094905 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d40260f4e79c7dce9758d71a34171c3f2eba0f8d68ec2557ad919105c1e20a8\": container with ID starting with 6d40260f4e79c7dce9758d71a34171c3f2eba0f8d68ec2557ad919105c1e20a8 not found: ID does not exist" containerID="6d40260f4e79c7dce9758d71a34171c3f2eba0f8d68ec2557ad919105c1e20a8"
Jan 31 04:35:18 crc kubenswrapper[4667]: I0131 04:35:18.094936 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d40260f4e79c7dce9758d71a34171c3f2eba0f8d68ec2557ad919105c1e20a8"} err="failed to get container status \"6d40260f4e79c7dce9758d71a34171c3f2eba0f8d68ec2557ad919105c1e20a8\": rpc error: code = NotFound desc = could not find container \"6d40260f4e79c7dce9758d71a34171c3f2eba0f8d68ec2557ad919105c1e20a8\": container with ID starting with 6d40260f4e79c7dce9758d71a34171c3f2eba0f8d68ec2557ad919105c1e20a8 not found: ID does not exist"
Jan 31 04:35:18 crc kubenswrapper[4667]: I0131 04:35:18.094955 4667 scope.go:117] "RemoveContainer" containerID="536fbe598d0b9f8c655a31f9316c1ac0748c3de2f795030f3d11430dba02bb86"
Jan 31 04:35:18 crc kubenswrapper[4667]: E0131 04:35:18.095217 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"536fbe598d0b9f8c655a31f9316c1ac0748c3de2f795030f3d11430dba02bb86\": container with ID starting with 536fbe598d0b9f8c655a31f9316c1ac0748c3de2f795030f3d11430dba02bb86 not found: ID does not exist" containerID="536fbe598d0b9f8c655a31f9316c1ac0748c3de2f795030f3d11430dba02bb86"
Jan 31 04:35:18 crc kubenswrapper[4667]: I0131 04:35:18.095242 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"536fbe598d0b9f8c655a31f9316c1ac0748c3de2f795030f3d11430dba02bb86"} err="failed to get container status \"536fbe598d0b9f8c655a31f9316c1ac0748c3de2f795030f3d11430dba02bb86\": rpc error: code = NotFound desc = could not find container \"536fbe598d0b9f8c655a31f9316c1ac0748c3de2f795030f3d11430dba02bb86\": container with ID starting with 536fbe598d0b9f8c655a31f9316c1ac0748c3de2f795030f3d11430dba02bb86 not found: ID does not exist"
Jan 31 04:35:19 crc kubenswrapper[4667]: I0131 04:35:19.293031 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b2abc09-a368-48c8-a0f0-b900c5bb9be1" path="/var/lib/kubelet/pods/9b2abc09-a368-48c8-a0f0-b900c5bb9be1/volumes"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:33.949630 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 31 04:35:34 crc kubenswrapper[4667]: E0131 04:35:33.970562 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b2abc09-a368-48c8-a0f0-b900c5bb9be1" containerName="extract-content"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:33.970590 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b2abc09-a368-48c8-a0f0-b900c5bb9be1" containerName="extract-content"
Jan 31 04:35:34 crc kubenswrapper[4667]: E0131 04:35:33.970611 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b2abc09-a368-48c8-a0f0-b900c5bb9be1" containerName="registry-server"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:33.970619 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b2abc09-a368-48c8-a0f0-b900c5bb9be1" containerName="registry-server"
Jan 31 04:35:34 crc kubenswrapper[4667]: E0131 04:35:33.970645 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b2abc09-a368-48c8-a0f0-b900c5bb9be1" containerName="extract-utilities"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:33.970653 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b2abc09-a368-48c8-a0f0-b900c5bb9be1" containerName="extract-utilities"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:33.970942 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b2abc09-a368-48c8-a0f0-b900c5bb9be1" containerName="registry-server"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:33.971686 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:33.971794 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:33.977713 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:33.977810 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-cnjqs"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:33.978140 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:33.978211 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.054099 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.054186 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.054250 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.054293 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.054331 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.054375 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-config-data\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.054465 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knzdb\" (UniqueName: \"kubernetes.io/projected/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-kube-api-access-knzdb\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.054499 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.054558 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.157341 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.157440 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.157488 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.157520 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.157557 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-config-data\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.157644 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knzdb\" (UniqueName: \"kubernetes.io/projected/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-kube-api-access-knzdb\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.157668 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.157711 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.157819 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.158005 4667 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.158213 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.158609 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.159551 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.160038 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-config-data\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.176295 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.181400 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.181888 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knzdb\" (UniqueName: \"kubernetes.io/projected/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-kube-api-access-knzdb\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.182903 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.194052 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.330381 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 31 04:35:34 crc kubenswrapper[4667]: I0131 04:35:34.870963 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 31 04:35:35 crc kubenswrapper[4667]: I0131 04:35:35.117231 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1","Type":"ContainerStarted","Data":"2d44fd6a9b1a57ec87ea8058a01d9bed0bf8581e40a1b515cd5aeefa586693ce"}
Jan 31 04:35:45 crc kubenswrapper[4667]: I0131 04:35:45.704413 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 04:35:45 crc kubenswrapper[4667]: I0131 04:35:45.705559 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 04:36:15 crc kubenswrapper[4667]: I0131 04:36:15.704822 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 04:36:15 crc kubenswrapper[4667]: I0131 04:36:15.705708 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 04:36:19 crc kubenswrapper[4667]: E0131 04:36:19.163344 4667 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Jan 31 04:36:19 crc kubenswrapper[4667]: E0131 04:36:19.168458 4667 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-knzdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(6f4da9b8-1fb2-4d7c-b933-d5749919e9d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 31 04:36:19 crc kubenswrapper[4667]: E0131 04:36:19.169823 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="6f4da9b8-1fb2-4d7c-b933-d5749919e9d1"
Jan 31 04:36:19 crc kubenswrapper[4667]: E0131 04:36:19.656149 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="6f4da9b8-1fb2-4d7c-b933-d5749919e9d1"
Jan 31 04:36:35 crc kubenswrapper[4667]: I0131 04:36:35.800695 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 31 04:36:37 crc kubenswrapper[4667]: I0131 04:36:37.919387 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1","Type":"ContainerStarted","Data":"2d2dc467d30094a30d699ff8db5f32b1d7fa9889db4f92a313c5bbd913b32f84"}
Jan 31 04:36:37 crc kubenswrapper[4667]: I0131 04:36:37.951937 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.045212163 podStartE2EDuration="1m5.951918314s" podCreationTimestamp="2026-01-31 04:35:32 +0000 UTC" firstStartedPulling="2026-01-31 04:35:34.889437088 +0000 UTC m=+2858.405772387" lastFinishedPulling="2026-01-31 04:36:35.796143199 +0000 UTC m=+2919.312478538" observedRunningTime="2026-01-31 04:36:37.944403997 +0000 UTC m=+2921.460739296" watchObservedRunningTime="2026-01-31 04:36:37.951918314 +0000 UTC m=+2921.468253613"
Jan 31 04:36:45 crc kubenswrapper[4667]: I0131 04:36:45.704198 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 04:36:45 crc kubenswrapper[4667]: I0131 04:36:45.705258 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 04:36:45 crc kubenswrapper[4667]: I0131 04:36:45.705328 4667 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g"
Jan 31 04:36:45 crc kubenswrapper[4667]: I0131 04:36:45.706549 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"419bf0d7a3fec41291c94c017831c9dce37119879fd8ba11b94215270b777f4e"} pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 04:36:45 crc kubenswrapper[4667]: I0131 04:36:45.706630 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" containerID="cri-o://419bf0d7a3fec41291c94c017831c9dce37119879fd8ba11b94215270b777f4e" gracePeriod=600
Jan 31 04:36:46 crc kubenswrapper[4667]: I0131 04:36:46.011752 4667 generic.go:334] "Generic (PLEG): container finished" podID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerID="419bf0d7a3fec41291c94c017831c9dce37119879fd8ba11b94215270b777f4e" exitCode=0
Jan 31 04:36:46 crc kubenswrapper[4667]: I0131 04:36:46.011852 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerDied","Data":"419bf0d7a3fec41291c94c017831c9dce37119879fd8ba11b94215270b777f4e"}
Jan 31 04:36:46 crc kubenswrapper[4667]: I0131 04:36:46.012179 4667 scope.go:117] "RemoveContainer" containerID="c179b2f38e008b6a7310f1984b183ee74fb222f8cc8019eb62046e0a9a89867f"
Jan 31 04:36:47 crc kubenswrapper[4667]: I0131 04:36:47.027500 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerStarted","Data":"4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a"}
Jan 31 04:39:15 crc kubenswrapper[4667]: I0131 04:39:15.704669 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 04:39:15 crc kubenswrapper[4667]: I0131 04:39:15.705391 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 04:39:45 crc kubenswrapper[4667]: I0131 04:39:45.705091 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 04:39:45 crc kubenswrapper[4667]: I0131 04:39:45.706073 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 04:39:55 crc kubenswrapper[4667]: I0131 04:39:55.351720 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d2tnc"]
Jan 31 04:39:55 crc kubenswrapper[4667]: I0131 04:39:55.355463 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d2tnc"
Jan 31 04:39:55 crc kubenswrapper[4667]: I0131 04:39:55.391803 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d2tnc"]
Jan 31 04:39:55 crc kubenswrapper[4667]: I0131 04:39:55.479510 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f4a06e-6a23-4395-9857-b226e23840a6-catalog-content\") pod \"community-operators-d2tnc\" (UID: \"13f4a06e-6a23-4395-9857-b226e23840a6\") " pod="openshift-marketplace/community-operators-d2tnc"
Jan 31 04:39:55 crc kubenswrapper[4667]: I0131 04:39:55.480235 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xndl\" (UniqueName: \"kubernetes.io/projected/13f4a06e-6a23-4395-9857-b226e23840a6-kube-api-access-8xndl\") pod \"community-operators-d2tnc\" (UID: \"13f4a06e-6a23-4395-9857-b226e23840a6\") " pod="openshift-marketplace/community-operators-d2tnc"
Jan 31 04:39:55 crc kubenswrapper[4667]: I0131 04:39:55.480421 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f4a06e-6a23-4395-9857-b226e23840a6-utilities\") pod \"community-operators-d2tnc\" (UID: \"13f4a06e-6a23-4395-9857-b226e23840a6\") " pod="openshift-marketplace/community-operators-d2tnc"
Jan 31 04:39:55 crc kubenswrapper[4667]: I0131 04:39:55.583445 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xndl\" (UniqueName: \"kubernetes.io/projected/13f4a06e-6a23-4395-9857-b226e23840a6-kube-api-access-8xndl\") pod \"community-operators-d2tnc\" (UID: \"13f4a06e-6a23-4395-9857-b226e23840a6\") " pod="openshift-marketplace/community-operators-d2tnc"
Jan 31 04:39:55 crc kubenswrapper[4667]: I0131 04:39:55.583685 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f4a06e-6a23-4395-9857-b226e23840a6-utilities\") pod \"community-operators-d2tnc\" (UID: \"13f4a06e-6a23-4395-9857-b226e23840a6\") " pod="openshift-marketplace/community-operators-d2tnc"
Jan 31 04:39:55 crc kubenswrapper[4667]: I0131 04:39:55.583821 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f4a06e-6a23-4395-9857-b226e23840a6-catalog-content\") pod \"community-operators-d2tnc\" (UID: \"13f4a06e-6a23-4395-9857-b226e23840a6\") " pod="openshift-marketplace/community-operators-d2tnc"
Jan 31 04:39:55 crc kubenswrapper[4667]: I0131 04:39:55.584464 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f4a06e-6a23-4395-9857-b226e23840a6-utilities\") pod \"community-operators-d2tnc\" (UID: \"13f4a06e-6a23-4395-9857-b226e23840a6\") " pod="openshift-marketplace/community-operators-d2tnc"
Jan 31 04:39:55 crc kubenswrapper[4667]: I0131 04:39:55.584644 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f4a06e-6a23-4395-9857-b226e23840a6-catalog-content\") pod \"community-operators-d2tnc\" (UID: \"13f4a06e-6a23-4395-9857-b226e23840a6\") " pod="openshift-marketplace/community-operators-d2tnc"
Jan 31 04:39:55 crc kubenswrapper[4667]: I0131 04:39:55.609540 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xndl\" (UniqueName: \"kubernetes.io/projected/13f4a06e-6a23-4395-9857-b226e23840a6-kube-api-access-8xndl\") pod \"community-operators-d2tnc\" (UID: \"13f4a06e-6a23-4395-9857-b226e23840a6\") " pod="openshift-marketplace/community-operators-d2tnc"
Jan 31 04:39:55 crc kubenswrapper[4667]: I0131 04:39:55.702399 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d2tnc"
Jan 31 04:39:56 crc kubenswrapper[4667]: I0131 04:39:56.746778 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d2tnc"]
Jan 31 04:39:57 crc kubenswrapper[4667]: I0131 04:39:57.449630 4667 generic.go:334] "Generic (PLEG): container finished" podID="13f4a06e-6a23-4395-9857-b226e23840a6" containerID="08450554e4d43a35784dfc88189a85ee71928ca5d33ffbee2165f89ab17cdece" exitCode=0
Jan 31 04:39:57 crc kubenswrapper[4667]: I0131 04:39:57.449731 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2tnc" event={"ID":"13f4a06e-6a23-4395-9857-b226e23840a6","Type":"ContainerDied","Data":"08450554e4d43a35784dfc88189a85ee71928ca5d33ffbee2165f89ab17cdece"}
Jan 31 04:39:57 crc kubenswrapper[4667]: I0131 04:39:57.450658 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2tnc" event={"ID":"13f4a06e-6a23-4395-9857-b226e23840a6","Type":"ContainerStarted","Data":"59bd109c984950736264d6d6a035dc7e4409396eb8021b1c83d422619782a216"}
Jan 31 04:39:57 crc kubenswrapper[4667]: I0131 04:39:57.925452 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v99x9"]
Jan 31 04:39:57 crc kubenswrapper[4667]: I0131 04:39:57.928759 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v99x9"
Jan 31 04:39:57 crc kubenswrapper[4667]: I0131 04:39:57.947552 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v99x9"]
Jan 31 04:39:58 crc kubenswrapper[4667]: I0131 04:39:58.092213 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4tm5\" (UniqueName: \"kubernetes.io/projected/bfe628ed-8650-4bbf-aad3-495baec5d7cb-kube-api-access-n4tm5\") pod \"redhat-marketplace-v99x9\" (UID: \"bfe628ed-8650-4bbf-aad3-495baec5d7cb\") " pod="openshift-marketplace/redhat-marketplace-v99x9"
Jan 31 04:39:58 crc kubenswrapper[4667]: I0131 04:39:58.092900 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe628ed-8650-4bbf-aad3-495baec5d7cb-catalog-content\") pod \"redhat-marketplace-v99x9\" (UID: \"bfe628ed-8650-4bbf-aad3-495baec5d7cb\") " pod="openshift-marketplace/redhat-marketplace-v99x9"
Jan 31 04:39:58 crc kubenswrapper[4667]: I0131 04:39:58.092941 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe628ed-8650-4bbf-aad3-495baec5d7cb-utilities\") pod \"redhat-marketplace-v99x9\" (UID: \"bfe628ed-8650-4bbf-aad3-495baec5d7cb\") " pod="openshift-marketplace/redhat-marketplace-v99x9"
Jan 31 04:39:58 crc kubenswrapper[4667]: I0131 04:39:58.195398 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4tm5\" (UniqueName: \"kubernetes.io/projected/bfe628ed-8650-4bbf-aad3-495baec5d7cb-kube-api-access-n4tm5\") pod \"redhat-marketplace-v99x9\" (UID: \"bfe628ed-8650-4bbf-aad3-495baec5d7cb\") " pod="openshift-marketplace/redhat-marketplace-v99x9"
Jan 31 04:39:58 crc kubenswrapper[4667]: I0131 04:39:58.195472 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe628ed-8650-4bbf-aad3-495baec5d7cb-catalog-content\") pod \"redhat-marketplace-v99x9\" (UID: \"bfe628ed-8650-4bbf-aad3-495baec5d7cb\") " pod="openshift-marketplace/redhat-marketplace-v99x9"
Jan 31 04:39:58 crc kubenswrapper[4667]: I0131 04:39:58.195504 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe628ed-8650-4bbf-aad3-495baec5d7cb-utilities\") pod \"redhat-marketplace-v99x9\" (UID: \"bfe628ed-8650-4bbf-aad3-495baec5d7cb\") " pod="openshift-marketplace/redhat-marketplace-v99x9"
Jan 31 04:39:58 crc kubenswrapper[4667]: I0131 04:39:58.196138 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe628ed-8650-4bbf-aad3-495baec5d7cb-utilities\") pod \"redhat-marketplace-v99x9\" (UID: \"bfe628ed-8650-4bbf-aad3-495baec5d7cb\") " pod="openshift-marketplace/redhat-marketplace-v99x9"
Jan 31 04:39:58 crc kubenswrapper[4667]: I0131 04:39:58.197140 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe628ed-8650-4bbf-aad3-495baec5d7cb-catalog-content\") pod \"redhat-marketplace-v99x9\" (UID: \"bfe628ed-8650-4bbf-aad3-495baec5d7cb\") " pod="openshift-marketplace/redhat-marketplace-v99x9"
Jan 31 04:39:58 crc kubenswrapper[4667]: I0131 04:39:58.235310 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4tm5\" (UniqueName: \"kubernetes.io/projected/bfe628ed-8650-4bbf-aad3-495baec5d7cb-kube-api-access-n4tm5\") pod \"redhat-marketplace-v99x9\" (UID: \"bfe628ed-8650-4bbf-aad3-495baec5d7cb\") " pod="openshift-marketplace/redhat-marketplace-v99x9"
Jan 31 04:39:58 crc kubenswrapper[4667]: I0131 04:39:58.261635 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v99x9"
Jan 31 04:39:58 crc kubenswrapper[4667]: I0131 04:39:58.847540 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v99x9"]
Jan 31 04:39:59 crc kubenswrapper[4667]: I0131 04:39:59.486125 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2tnc" event={"ID":"13f4a06e-6a23-4395-9857-b226e23840a6","Type":"ContainerStarted","Data":"9a37244e1423228fd9b78a189ec51452237b367c08f9deda018c0b669fbc2431"}
Jan 31 04:39:59 crc kubenswrapper[4667]: I0131 04:39:59.493072 4667 generic.go:334] "Generic (PLEG): container finished" podID="bfe628ed-8650-4bbf-aad3-495baec5d7cb" containerID="30e5e8ac15d6ae01bdfad1d8795c5d681eece3e0b72a1c491addc4f25b8c8e3c" exitCode=0
Jan 31 04:39:59 crc kubenswrapper[4667]: I0131 04:39:59.493130 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v99x9" event={"ID":"bfe628ed-8650-4bbf-aad3-495baec5d7cb","Type":"ContainerDied","Data":"30e5e8ac15d6ae01bdfad1d8795c5d681eece3e0b72a1c491addc4f25b8c8e3c"}
Jan 31 04:39:59 crc kubenswrapper[4667]: I0131 04:39:59.493208 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v99x9" event={"ID":"bfe628ed-8650-4bbf-aad3-495baec5d7cb","Type":"ContainerStarted","Data":"9cd68e503fd34d8bf0a5376340ef3d0d0757034d4479f6390ed9f77b35010d62"}
Jan 31 04:39:59 crc kubenswrapper[4667]: I0131 04:39:59.495815 4667 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 31 04:40:01 crc kubenswrapper[4667]: I0131 04:40:01.522567 4667 generic.go:334] "Generic (PLEG): container finished" podID="13f4a06e-6a23-4395-9857-b226e23840a6" containerID="9a37244e1423228fd9b78a189ec51452237b367c08f9deda018c0b669fbc2431" exitCode=0
Jan 31 04:40:01 crc kubenswrapper[4667]: I0131 04:40:01.522707 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2tnc" event={"ID":"13f4a06e-6a23-4395-9857-b226e23840a6","Type":"ContainerDied","Data":"9a37244e1423228fd9b78a189ec51452237b367c08f9deda018c0b669fbc2431"}
Jan 31 04:40:01 crc kubenswrapper[4667]: I0131 04:40:01.528524 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v99x9" event={"ID":"bfe628ed-8650-4bbf-aad3-495baec5d7cb","Type":"ContainerStarted","Data":"fd5c1a7825cc70b6833bf24a11284a6e9c7e71e28b2f8a851daee23db00b1064"}
Jan 31 04:40:03 crc kubenswrapper[4667]: I0131 04:40:03.555418 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2tnc" event={"ID":"13f4a06e-6a23-4395-9857-b226e23840a6","Type":"ContainerStarted","Data":"0ee9218d98958736f1472a611e08a4f752f6614c3f9cf073a22d160c4dace859"}
Jan 31 04:40:03 crc kubenswrapper[4667]: I0131 04:40:03.558732 4667 generic.go:334] "Generic (PLEG): container finished" podID="bfe628ed-8650-4bbf-aad3-495baec5d7cb" containerID="fd5c1a7825cc70b6833bf24a11284a6e9c7e71e28b2f8a851daee23db00b1064" exitCode=0
Jan 31 04:40:03 crc kubenswrapper[4667]: I0131 04:40:03.558753 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v99x9" event={"ID":"bfe628ed-8650-4bbf-aad3-495baec5d7cb","Type":"ContainerDied","Data":"fd5c1a7825cc70b6833bf24a11284a6e9c7e71e28b2f8a851daee23db00b1064"}
Jan 31 04:40:03 crc kubenswrapper[4667]: I0131 04:40:03.585395 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d2tnc" podStartSLOduration=2.945430361 podStartE2EDuration="8.585374715s" podCreationTimestamp="2026-01-31 04:39:55 +0000 UTC" firstStartedPulling="2026-01-31 04:39:57.452137802 +0000 UTC m=+3120.968473111" lastFinishedPulling="2026-01-31 04:40:03.092082156 +0000 UTC m=+3126.608417465" observedRunningTime="2026-01-31 04:40:03.582424748 +0000 UTC m=+3127.098760047" watchObservedRunningTime="2026-01-31 04:40:03.585374715 +0000 UTC m=+3127.101710014"
Jan 31 04:40:04 crc kubenswrapper[4667]: I0131 04:40:04.570295 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v99x9" event={"ID":"bfe628ed-8650-4bbf-aad3-495baec5d7cb","Type":"ContainerStarted","Data":"c877552106dfb26c11efb6ebd2466a2cccdabab4a697c98a5da41fc05bd87056"}
Jan 31 04:40:04 crc kubenswrapper[4667]: I0131 04:40:04.603354 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v99x9" podStartSLOduration=3.105679103 podStartE2EDuration="7.603332636s" podCreationTimestamp="2026-01-31 04:39:57 +0000 UTC" firstStartedPulling="2026-01-31 04:39:59.49557063 +0000 UTC m=+3123.011905929" lastFinishedPulling="2026-01-31 04:40:03.993224163 +0000 UTC m=+3127.509559462" observedRunningTime="2026-01-31 04:40:04.595351357 +0000 UTC m=+3128.111686656" watchObservedRunningTime="2026-01-31 04:40:04.603332636 +0000 UTC m=+3128.119667925"
Jan 31 04:40:05 crc kubenswrapper[4667]: I0131 04:40:05.703275 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d2tnc"
Jan 31 04:40:05 crc kubenswrapper[4667]: I0131 04:40:05.703814 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d2tnc"
Jan 31 04:40:06 crc kubenswrapper[4667]: I0131 04:40:06.754884 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-d2tnc" podUID="13f4a06e-6a23-4395-9857-b226e23840a6" containerName="registry-server" probeResult="failure" output=<
Jan 31 04:40:06 crc kubenswrapper[4667]: timeout: failed to connect service ":50051" within 1s
Jan 31 04:40:06 crc kubenswrapper[4667]: >
Jan 31 04:40:08 crc kubenswrapper[4667]: I0131 04:40:08.262713 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v99x9"
Jan 31 04:40:08 crc kubenswrapper[4667]: I0131 04:40:08.263298 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v99x9"
Jan 31 04:40:08 crc kubenswrapper[4667]: I0131 04:40:08.347513 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v99x9"
Jan 31 04:40:15 crc kubenswrapper[4667]: I0131 04:40:15.704311 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 04:40:15 crc kubenswrapper[4667]: I0131 04:40:15.704828 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 04:40:15 crc kubenswrapper[4667]: I0131 04:40:15.704894 4667 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g"
Jan 31 04:40:15 crc kubenswrapper[4667]: I0131 04:40:15.705835 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a"} pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 04:40:15 crc kubenswrapper[4667]: I0131 04:40:15.705901 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" containerID="cri-o://4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" gracePeriod=600
Jan 31 04:40:15 crc kubenswrapper[4667]: I0131 04:40:15.760139 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d2tnc"
Jan 31 04:40:15 crc kubenswrapper[4667]: I0131 04:40:15.817891 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d2tnc"
Jan 31 04:40:15 crc kubenswrapper[4667]: E0131 04:40:15.866081 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:40:16 crc kubenswrapper[4667]: I0131 04:40:16.002006 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d2tnc"]
Jan 31 04:40:16 crc kubenswrapper[4667]: I0131 04:40:16.697975 4667 generic.go:334] "Generic (PLEG): container finished" podID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" exitCode=0
Jan 31 04:40:16 crc kubenswrapper[4667]: I0131 04:40:16.698090 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerDied","Data":"4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a"}
Jan 31 04:40:16 crc kubenswrapper[4667]: I0131 04:40:16.698653 4667 scope.go:117] "RemoveContainer" containerID="419bf0d7a3fec41291c94c017831c9dce37119879fd8ba11b94215270b777f4e"
Jan 31 04:40:16 crc kubenswrapper[4667]: I0131 04:40:16.700175 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a"
Jan 31 04:40:16 crc kubenswrapper[4667]: E0131 04:40:16.702819 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:40:17 crc kubenswrapper[4667]: I0131 04:40:17.710660 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d2tnc" podUID="13f4a06e-6a23-4395-9857-b226e23840a6" containerName="registry-server" containerID="cri-o://0ee9218d98958736f1472a611e08a4f752f6614c3f9cf073a22d160c4dace859" gracePeriod=2
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.350725 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v99x9"
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.413395 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d2tnc"
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.429002 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v99x9"]
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.508400 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f4a06e-6a23-4395-9857-b226e23840a6-utilities\") pod \"13f4a06e-6a23-4395-9857-b226e23840a6\" (UID: \"13f4a06e-6a23-4395-9857-b226e23840a6\") "
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.508448 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f4a06e-6a23-4395-9857-b226e23840a6-catalog-content\") pod \"13f4a06e-6a23-4395-9857-b226e23840a6\" (UID: \"13f4a06e-6a23-4395-9857-b226e23840a6\") "
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.508647 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xndl\" (UniqueName: \"kubernetes.io/projected/13f4a06e-6a23-4395-9857-b226e23840a6-kube-api-access-8xndl\") pod \"13f4a06e-6a23-4395-9857-b226e23840a6\" (UID: \"13f4a06e-6a23-4395-9857-b226e23840a6\") "
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.515290 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13f4a06e-6a23-4395-9857-b226e23840a6-utilities" (OuterVolumeSpecName: "utilities") pod "13f4a06e-6a23-4395-9857-b226e23840a6" (UID: "13f4a06e-6a23-4395-9857-b226e23840a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.520517 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13f4a06e-6a23-4395-9857-b226e23840a6-kube-api-access-8xndl" (OuterVolumeSpecName: "kube-api-access-8xndl") pod "13f4a06e-6a23-4395-9857-b226e23840a6" (UID: "13f4a06e-6a23-4395-9857-b226e23840a6"). InnerVolumeSpecName "kube-api-access-8xndl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.579809 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13f4a06e-6a23-4395-9857-b226e23840a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13f4a06e-6a23-4395-9857-b226e23840a6" (UID: "13f4a06e-6a23-4395-9857-b226e23840a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.613462 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f4a06e-6a23-4395-9857-b226e23840a6-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.613782 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f4a06e-6a23-4395-9857-b226e23840a6-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.613983 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xndl\" (UniqueName: \"kubernetes.io/projected/13f4a06e-6a23-4395-9857-b226e23840a6-kube-api-access-8xndl\") on node \"crc\" DevicePath \"\""
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.722634 4667 generic.go:334] "Generic (PLEG): container finished" podID="13f4a06e-6a23-4395-9857-b226e23840a6" containerID="0ee9218d98958736f1472a611e08a4f752f6614c3f9cf073a22d160c4dace859" exitCode=0
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.722945 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v99x9" podUID="bfe628ed-8650-4bbf-aad3-495baec5d7cb" containerName="registry-server" containerID="cri-o://c877552106dfb26c11efb6ebd2466a2cccdabab4a697c98a5da41fc05bd87056" gracePeriod=2
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.722997 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d2tnc"
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.723006 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2tnc" event={"ID":"13f4a06e-6a23-4395-9857-b226e23840a6","Type":"ContainerDied","Data":"0ee9218d98958736f1472a611e08a4f752f6614c3f9cf073a22d160c4dace859"}
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.723097 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2tnc" event={"ID":"13f4a06e-6a23-4395-9857-b226e23840a6","Type":"ContainerDied","Data":"59bd109c984950736264d6d6a035dc7e4409396eb8021b1c83d422619782a216"}
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.723129 4667 scope.go:117] "RemoveContainer" containerID="0ee9218d98958736f1472a611e08a4f752f6614c3f9cf073a22d160c4dace859"
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.765080 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d2tnc"]
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.767374 4667 scope.go:117] "RemoveContainer" containerID="9a37244e1423228fd9b78a189ec51452237b367c08f9deda018c0b669fbc2431"
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.778977 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d2tnc"]
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.839120 4667 scope.go:117] "RemoveContainer" containerID="08450554e4d43a35784dfc88189a85ee71928ca5d33ffbee2165f89ab17cdece"
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.929047 4667 scope.go:117] "RemoveContainer" containerID="0ee9218d98958736f1472a611e08a4f752f6614c3f9cf073a22d160c4dace859"
Jan 31 04:40:18 crc kubenswrapper[4667]: E0131 04:40:18.929500 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ee9218d98958736f1472a611e08a4f752f6614c3f9cf073a22d160c4dace859\": container with ID starting with 0ee9218d98958736f1472a611e08a4f752f6614c3f9cf073a22d160c4dace859 not found: ID does not exist" containerID="0ee9218d98958736f1472a611e08a4f752f6614c3f9cf073a22d160c4dace859"
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.929560 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ee9218d98958736f1472a611e08a4f752f6614c3f9cf073a22d160c4dace859"} err="failed to get container status \"0ee9218d98958736f1472a611e08a4f752f6614c3f9cf073a22d160c4dace859\": rpc error: code = NotFound desc = could not find container \"0ee9218d98958736f1472a611e08a4f752f6614c3f9cf073a22d160c4dace859\": container with ID starting with 0ee9218d98958736f1472a611e08a4f752f6614c3f9cf073a22d160c4dace859 not found: ID does not exist"
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.929597 4667 scope.go:117] "RemoveContainer" containerID="9a37244e1423228fd9b78a189ec51452237b367c08f9deda018c0b669fbc2431"
Jan 31 04:40:18 crc kubenswrapper[4667]: E0131 04:40:18.929895 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a37244e1423228fd9b78a189ec51452237b367c08f9deda018c0b669fbc2431\": container with ID starting with 9a37244e1423228fd9b78a189ec51452237b367c08f9deda018c0b669fbc2431 not found: ID does not exist" containerID="9a37244e1423228fd9b78a189ec51452237b367c08f9deda018c0b669fbc2431"
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.929934 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a37244e1423228fd9b78a189ec51452237b367c08f9deda018c0b669fbc2431"} err="failed to get container status \"9a37244e1423228fd9b78a189ec51452237b367c08f9deda018c0b669fbc2431\": rpc error: code = NotFound desc = could not find container \"9a37244e1423228fd9b78a189ec51452237b367c08f9deda018c0b669fbc2431\": container with ID starting with 9a37244e1423228fd9b78a189ec51452237b367c08f9deda018c0b669fbc2431 not found: ID does not exist"
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.929957 4667 scope.go:117] "RemoveContainer" containerID="08450554e4d43a35784dfc88189a85ee71928ca5d33ffbee2165f89ab17cdece"
Jan 31 04:40:18 crc kubenswrapper[4667]: E0131 04:40:18.930640 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08450554e4d43a35784dfc88189a85ee71928ca5d33ffbee2165f89ab17cdece\": container with ID starting with 08450554e4d43a35784dfc88189a85ee71928ca5d33ffbee2165f89ab17cdece not found: ID does not exist" containerID="08450554e4d43a35784dfc88189a85ee71928ca5d33ffbee2165f89ab17cdece"
Jan 31 04:40:18 crc kubenswrapper[4667]: I0131 04:40:18.930682 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08450554e4d43a35784dfc88189a85ee71928ca5d33ffbee2165f89ab17cdece"} err="failed to get container status \"08450554e4d43a35784dfc88189a85ee71928ca5d33ffbee2165f89ab17cdece\": rpc error: code = NotFound desc = could not find container \"08450554e4d43a35784dfc88189a85ee71928ca5d33ffbee2165f89ab17cdece\": container with ID starting with 08450554e4d43a35784dfc88189a85ee71928ca5d33ffbee2165f89ab17cdece not found: ID does not exist"
Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.292501 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13f4a06e-6a23-4395-9857-b226e23840a6" path="/var/lib/kubelet/pods/13f4a06e-6a23-4395-9857-b226e23840a6/volumes"
Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.338482 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v99x9"
Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.429652 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe628ed-8650-4bbf-aad3-495baec5d7cb-utilities\") pod \"bfe628ed-8650-4bbf-aad3-495baec5d7cb\" (UID: \"bfe628ed-8650-4bbf-aad3-495baec5d7cb\") "
Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.430183 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe628ed-8650-4bbf-aad3-495baec5d7cb-catalog-content\") pod \"bfe628ed-8650-4bbf-aad3-495baec5d7cb\" (UID: \"bfe628ed-8650-4bbf-aad3-495baec5d7cb\") "
Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.430554 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4tm5\" (UniqueName: \"kubernetes.io/projected/bfe628ed-8650-4bbf-aad3-495baec5d7cb-kube-api-access-n4tm5\") pod \"bfe628ed-8650-4bbf-aad3-495baec5d7cb\" (UID: \"bfe628ed-8650-4bbf-aad3-495baec5d7cb\") "
Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.434412 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfe628ed-8650-4bbf-aad3-495baec5d7cb-utilities" (OuterVolumeSpecName: "utilities") pod "bfe628ed-8650-4bbf-aad3-495baec5d7cb" (UID: "bfe628ed-8650-4bbf-aad3-495baec5d7cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.437892 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfe628ed-8650-4bbf-aad3-495baec5d7cb-kube-api-access-n4tm5" (OuterVolumeSpecName: "kube-api-access-n4tm5") pod "bfe628ed-8650-4bbf-aad3-495baec5d7cb" (UID: "bfe628ed-8650-4bbf-aad3-495baec5d7cb"). InnerVolumeSpecName "kube-api-access-n4tm5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.466172 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfe628ed-8650-4bbf-aad3-495baec5d7cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfe628ed-8650-4bbf-aad3-495baec5d7cb" (UID: "bfe628ed-8650-4bbf-aad3-495baec5d7cb"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.533459 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4tm5\" (UniqueName: \"kubernetes.io/projected/bfe628ed-8650-4bbf-aad3-495baec5d7cb-kube-api-access-n4tm5\") on node \"crc\" DevicePath \"\"" Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.533507 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe628ed-8650-4bbf-aad3-495baec5d7cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.533523 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe628ed-8650-4bbf-aad3-495baec5d7cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.744621 4667 generic.go:334] "Generic (PLEG): container finished" podID="bfe628ed-8650-4bbf-aad3-495baec5d7cb" containerID="c877552106dfb26c11efb6ebd2466a2cccdabab4a697c98a5da41fc05bd87056" exitCode=0 Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.744913 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v99x9" event={"ID":"bfe628ed-8650-4bbf-aad3-495baec5d7cb","Type":"ContainerDied","Data":"c877552106dfb26c11efb6ebd2466a2cccdabab4a697c98a5da41fc05bd87056"} Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.744973 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v99x9" event={"ID":"bfe628ed-8650-4bbf-aad3-495baec5d7cb","Type":"ContainerDied","Data":"9cd68e503fd34d8bf0a5376340ef3d0d0757034d4479f6390ed9f77b35010d62"} Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.744995 4667 scope.go:117] "RemoveContainer" containerID="c877552106dfb26c11efb6ebd2466a2cccdabab4a697c98a5da41fc05bd87056" Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.745345 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v99x9" Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.797046 4667 scope.go:117] "RemoveContainer" containerID="fd5c1a7825cc70b6833bf24a11284a6e9c7e71e28b2f8a851daee23db00b1064" Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.798664 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v99x9"] Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.808006 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v99x9"] Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.835610 4667 scope.go:117] "RemoveContainer" containerID="30e5e8ac15d6ae01bdfad1d8795c5d681eece3e0b72a1c491addc4f25b8c8e3c" Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.857046 4667 scope.go:117] "RemoveContainer" containerID="c877552106dfb26c11efb6ebd2466a2cccdabab4a697c98a5da41fc05bd87056" Jan 31 04:40:19 crc kubenswrapper[4667]: E0131 04:40:19.857616 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c877552106dfb26c11efb6ebd2466a2cccdabab4a697c98a5da41fc05bd87056\": container with ID starting with c877552106dfb26c11efb6ebd2466a2cccdabab4a697c98a5da41fc05bd87056 not found: ID does not exist" containerID="c877552106dfb26c11efb6ebd2466a2cccdabab4a697c98a5da41fc05bd87056" Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.857670 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c877552106dfb26c11efb6ebd2466a2cccdabab4a697c98a5da41fc05bd87056"} err="failed to get container status \"c877552106dfb26c11efb6ebd2466a2cccdabab4a697c98a5da41fc05bd87056\": rpc error: code = NotFound desc = could not find container \"c877552106dfb26c11efb6ebd2466a2cccdabab4a697c98a5da41fc05bd87056\": container with ID starting with c877552106dfb26c11efb6ebd2466a2cccdabab4a697c98a5da41fc05bd87056 not found: ID does not exist" Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.857708 4667 scope.go:117] "RemoveContainer" containerID="fd5c1a7825cc70b6833bf24a11284a6e9c7e71e28b2f8a851daee23db00b1064" Jan 31 04:40:19 crc kubenswrapper[4667]: E0131 04:40:19.858097 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd5c1a7825cc70b6833bf24a11284a6e9c7e71e28b2f8a851daee23db00b1064\": container with ID starting with fd5c1a7825cc70b6833bf24a11284a6e9c7e71e28b2f8a851daee23db00b1064 not found: ID does not exist" containerID="fd5c1a7825cc70b6833bf24a11284a6e9c7e71e28b2f8a851daee23db00b1064" Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.858128 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd5c1a7825cc70b6833bf24a11284a6e9c7e71e28b2f8a851daee23db00b1064"} err="failed to get container status \"fd5c1a7825cc70b6833bf24a11284a6e9c7e71e28b2f8a851daee23db00b1064\": rpc error: code = NotFound desc = could not find container \"fd5c1a7825cc70b6833bf24a11284a6e9c7e71e28b2f8a851daee23db00b1064\": container with ID starting with fd5c1a7825cc70b6833bf24a11284a6e9c7e71e28b2f8a851daee23db00b1064 not found: ID does not exist" Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.858148 4667 scope.go:117] "RemoveContainer" containerID="30e5e8ac15d6ae01bdfad1d8795c5d681eece3e0b72a1c491addc4f25b8c8e3c" Jan 31 04:40:19 crc kubenswrapper[4667]: E0131 04:40:19.858346 4667 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"30e5e8ac15d6ae01bdfad1d8795c5d681eece3e0b72a1c491addc4f25b8c8e3c\": container with ID starting with 30e5e8ac15d6ae01bdfad1d8795c5d681eece3e0b72a1c491addc4f25b8c8e3c not found: ID does not exist" containerID="30e5e8ac15d6ae01bdfad1d8795c5d681eece3e0b72a1c491addc4f25b8c8e3c" Jan 31 04:40:19 crc kubenswrapper[4667]: I0131 04:40:19.858370 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e5e8ac15d6ae01bdfad1d8795c5d681eece3e0b72a1c491addc4f25b8c8e3c"} err="failed to get container status \"30e5e8ac15d6ae01bdfad1d8795c5d681eece3e0b72a1c491addc4f25b8c8e3c\": rpc error: code = NotFound desc = could not find container \"30e5e8ac15d6ae01bdfad1d8795c5d681eece3e0b72a1c491addc4f25b8c8e3c\": container with ID starting with 30e5e8ac15d6ae01bdfad1d8795c5d681eece3e0b72a1c491addc4f25b8c8e3c not found: ID does not exist" Jan 31 04:40:21 crc kubenswrapper[4667]: I0131 04:40:21.294723 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfe628ed-8650-4bbf-aad3-495baec5d7cb" path="/var/lib/kubelet/pods/bfe628ed-8650-4bbf-aad3-495baec5d7cb/volumes" Jan 31 04:40:30 crc kubenswrapper[4667]: I0131 04:40:30.282711 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:40:30 crc kubenswrapper[4667]: E0131 04:40:30.283570 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:40:44 crc kubenswrapper[4667]: I0131 04:40:44.282668 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:40:44 crc kubenswrapper[4667]: E0131 04:40:44.283584 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:40:56 crc kubenswrapper[4667]: I0131 04:40:56.282390 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:40:56 crc kubenswrapper[4667]: E0131 04:40:56.283758 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:41:10 crc kubenswrapper[4667]: I0131 04:41:10.282055 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:41:10 crc kubenswrapper[4667]: E0131 04:41:10.283145 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:41:25 crc kubenswrapper[4667]: I0131 04:41:25.283323 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:41:25 crc kubenswrapper[4667]: E0131 04:41:25.284312 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:41:38 crc kubenswrapper[4667]: I0131 04:41:38.282955 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:41:38 crc kubenswrapper[4667]: E0131 04:41:38.284021 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:41:50 crc kubenswrapper[4667]: I0131 04:41:50.282252 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:41:50 crc kubenswrapper[4667]: E0131 04:41:50.284016 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:42:02 crc kubenswrapper[4667]: I0131 04:42:02.282798 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:42:02 crc kubenswrapper[4667]: E0131 04:42:02.283584 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:42:16 crc kubenswrapper[4667]: I0131 04:42:16.283229 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:42:16 crc kubenswrapper[4667]: E0131 04:42:16.285087 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:42:28 crc kubenswrapper[4667]: I0131 04:42:28.283468 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:42:28 crc kubenswrapper[4667]: E0131 04:42:28.284755 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:42:34 crc kubenswrapper[4667]: I0131 04:42:34.319106 4667 generic.go:334] "Generic (PLEG): container finished" podID="6f4da9b8-1fb2-4d7c-b933-d5749919e9d1" containerID="2d2dc467d30094a30d699ff8db5f32b1d7fa9889db4f92a313c5bbd913b32f84" exitCode=0 Jan 31 04:42:34 crc kubenswrapper[4667]: I0131 04:42:34.319237 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1","Type":"ContainerDied","Data":"2d2dc467d30094a30d699ff8db5f32b1d7fa9889db4f92a313c5bbd913b32f84"} Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.751335 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.844566 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-ssh-key\") pod \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.845122 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.845262 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-test-operator-ephemeral-temporary\") pod \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.845490 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-openstack-config\") pod \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.845630 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-config-data\") pod \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.845899 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/secret/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-ca-certs\") pod \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.846046 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knzdb\" (UniqueName: \"kubernetes.io/projected/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-kube-api-access-knzdb\") pod \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.846153 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-test-operator-ephemeral-workdir\") pod \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.846352 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-openstack-config-secret\") pod \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\" (UID: \"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1\") " Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.846583 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "6f4da9b8-1fb2-4d7c-b933-d5749919e9d1" (UID: "6f4da9b8-1fb2-4d7c-b933-d5749919e9d1"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.846943 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-config-data" (OuterVolumeSpecName: "config-data") pod "6f4da9b8-1fb2-4d7c-b933-d5749919e9d1" (UID: "6f4da9b8-1fb2-4d7c-b933-d5749919e9d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.855413 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "6f4da9b8-1fb2-4d7c-b933-d5749919e9d1" (UID: "6f4da9b8-1fb2-4d7c-b933-d5749919e9d1"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.861241 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-kube-api-access-knzdb" (OuterVolumeSpecName: "kube-api-access-knzdb") pod "6f4da9b8-1fb2-4d7c-b933-d5749919e9d1" (UID: "6f4da9b8-1fb2-4d7c-b933-d5749919e9d1"). InnerVolumeSpecName "kube-api-access-knzdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.861281 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "test-operator-logs") pod "6f4da9b8-1fb2-4d7c-b933-d5749919e9d1" (UID: "6f4da9b8-1fb2-4d7c-b933-d5749919e9d1"). 
InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.879682 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6f4da9b8-1fb2-4d7c-b933-d5749919e9d1" (UID: "6f4da9b8-1fb2-4d7c-b933-d5749919e9d1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.880256 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6f4da9b8-1fb2-4d7c-b933-d5749919e9d1" (UID: "6f4da9b8-1fb2-4d7c-b933-d5749919e9d1"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.882355 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "6f4da9b8-1fb2-4d7c-b933-d5749919e9d1" (UID: "6f4da9b8-1fb2-4d7c-b933-d5749919e9d1"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.903391 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6f4da9b8-1fb2-4d7c-b933-d5749919e9d1" (UID: "6f4da9b8-1fb2-4d7c-b933-d5749919e9d1"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.948295 4667 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.948330 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knzdb\" (UniqueName: \"kubernetes.io/projected/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-kube-api-access-knzdb\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.948361 4667 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.948374 4667 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.948384 4667 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.948953 4667 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.948982 4667 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.948996 4667 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.949015 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f4da9b8-1fb2-4d7c-b933-d5749919e9d1-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:35 crc kubenswrapper[4667]: I0131 04:42:35.972534 4667 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 31 04:42:36 crc kubenswrapper[4667]: I0131 04:42:36.051124 4667 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 31 04:42:36 crc kubenswrapper[4667]: I0131 04:42:36.349389 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6f4da9b8-1fb2-4d7c-b933-d5749919e9d1","Type":"ContainerDied","Data":"2d44fd6a9b1a57ec87ea8058a01d9bed0bf8581e40a1b515cd5aeefa586693ce"} Jan 31 04:42:36 crc kubenswrapper[4667]: I0131 04:42:36.349440 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 31 04:42:36 crc kubenswrapper[4667]: I0131 04:42:36.349455 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d44fd6a9b1a57ec87ea8058a01d9bed0bf8581e40a1b515cd5aeefa586693ce" Jan 31 04:42:40 crc kubenswrapper[4667]: I0131 04:42:40.281570 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:42:40 crc kubenswrapper[4667]: E0131 04:42:40.282318 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:42:44 crc kubenswrapper[4667]: I0131 04:42:44.814470 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 04:42:44 crc kubenswrapper[4667]: E0131 04:42:44.816056 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe628ed-8650-4bbf-aad3-495baec5d7cb" containerName="extract-content" Jan 31 04:42:44 crc kubenswrapper[4667]: I0131 04:42:44.816083 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe628ed-8650-4bbf-aad3-495baec5d7cb" containerName="extract-content" Jan 31 04:42:44 crc kubenswrapper[4667]: E0131 04:42:44.816123 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f4a06e-6a23-4395-9857-b226e23840a6" containerName="extract-content" Jan 31 04:42:44 crc kubenswrapper[4667]: I0131 04:42:44.816139 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f4a06e-6a23-4395-9857-b226e23840a6" containerName="extract-content" Jan 31 04:42:44 crc kubenswrapper[4667]: E0131 04:42:44.816199 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f4a06e-6a23-4395-9857-b226e23840a6" containerName="registry-server" Jan 31 04:42:44 crc kubenswrapper[4667]: I0131 04:42:44.816217 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f4a06e-6a23-4395-9857-b226e23840a6" containerName="registry-server" Jan 31 04:42:44 crc kubenswrapper[4667]: E0131 04:42:44.816251 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe628ed-8650-4bbf-aad3-495baec5d7cb" containerName="registry-server" Jan 31 04:42:44 crc kubenswrapper[4667]: I0131 04:42:44.816266 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe628ed-8650-4bbf-aad3-495baec5d7cb" containerName="registry-server" Jan 31 04:42:44 crc kubenswrapper[4667]: E0131 04:42:44.816289 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f4da9b8-1fb2-4d7c-b933-d5749919e9d1" containerName="tempest-tests-tempest-tests-runner" Jan 31 04:42:44 crc kubenswrapper[4667]: I0131 04:42:44.816304 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f4da9b8-1fb2-4d7c-b933-d5749919e9d1" containerName="tempest-tests-tempest-tests-runner" Jan 31 04:42:44 crc kubenswrapper[4667]: E0131 04:42:44.816321 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe628ed-8650-4bbf-aad3-495baec5d7cb" containerName="extract-utilities" Jan 31 04:42:44 crc kubenswrapper[4667]: I0131 04:42:44.816336 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe628ed-8650-4bbf-aad3-495baec5d7cb" 
containerName="extract-utilities" Jan 31 04:42:44 crc kubenswrapper[4667]: E0131 04:42:44.816418 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f4a06e-6a23-4395-9857-b226e23840a6" containerName="extract-utilities" Jan 31 04:42:44 crc kubenswrapper[4667]: I0131 04:42:44.816434 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f4a06e-6a23-4395-9857-b226e23840a6" containerName="extract-utilities" Jan 31 04:42:44 crc kubenswrapper[4667]: I0131 04:42:44.816934 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="13f4a06e-6a23-4395-9857-b226e23840a6" containerName="registry-server" Jan 31 04:42:44 crc kubenswrapper[4667]: I0131 04:42:44.816963 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfe628ed-8650-4bbf-aad3-495baec5d7cb" containerName="registry-server" Jan 31 04:42:44 crc kubenswrapper[4667]: I0131 04:42:44.816992 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f4da9b8-1fb2-4d7c-b933-d5749919e9d1" containerName="tempest-tests-tempest-tests-runner" Jan 31 04:42:44 crc kubenswrapper[4667]: I0131 04:42:44.818388 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 04:42:44 crc kubenswrapper[4667]: I0131 04:42:44.834077 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 04:42:44 crc kubenswrapper[4667]: I0131 04:42:44.862763 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-cnjqs" Jan 31 04:42:45 crc kubenswrapper[4667]: I0131 04:42:45.000048 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkdrh\" (UniqueName: \"kubernetes.io/projected/dd17b156-9377-4bd0-ab7d-80b57f81c79c-kube-api-access-jkdrh\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"dd17b156-9377-4bd0-ab7d-80b57f81c79c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 04:42:45 crc kubenswrapper[4667]: I0131 04:42:45.000629 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"dd17b156-9377-4bd0-ab7d-80b57f81c79c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 04:42:45 crc kubenswrapper[4667]: I0131 04:42:45.102606 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkdrh\" (UniqueName: \"kubernetes.io/projected/dd17b156-9377-4bd0-ab7d-80b57f81c79c-kube-api-access-jkdrh\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"dd17b156-9377-4bd0-ab7d-80b57f81c79c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 04:42:45 crc kubenswrapper[4667]: I0131 04:42:45.102676 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"dd17b156-9377-4bd0-ab7d-80b57f81c79c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 04:42:45 crc kubenswrapper[4667]: I0131 04:42:45.103491 4667 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"dd17b156-9377-4bd0-ab7d-80b57f81c79c\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 04:42:45 crc kubenswrapper[4667]: I0131 04:42:45.132932 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkdrh\" (UniqueName: \"kubernetes.io/projected/dd17b156-9377-4bd0-ab7d-80b57f81c79c-kube-api-access-jkdrh\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"dd17b156-9377-4bd0-ab7d-80b57f81c79c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 04:42:45 crc kubenswrapper[4667]: I0131 04:42:45.136575 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"dd17b156-9377-4bd0-ab7d-80b57f81c79c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 04:42:45 crc kubenswrapper[4667]: I0131 04:42:45.184594 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 31 04:42:45 crc kubenswrapper[4667]: I0131 04:42:45.743109 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 31 04:42:46 crc kubenswrapper[4667]: I0131 04:42:46.482761 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"dd17b156-9377-4bd0-ab7d-80b57f81c79c","Type":"ContainerStarted","Data":"98fc818912ebf88ff934b6c10542260df584e4cf07b41866dcfd409b49b99215"} Jan 31 04:42:47 crc kubenswrapper[4667]: I0131 04:42:47.503981 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"dd17b156-9377-4bd0-ab7d-80b57f81c79c","Type":"ContainerStarted","Data":"345fab961707249a4ca8a237ee563522f1337b2276a9d4fbb8611eda5e911f5a"} Jan 31 04:42:47 crc kubenswrapper[4667]: I0131 04:42:47.527493 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.6006578620000003 podStartE2EDuration="3.527462021s" podCreationTimestamp="2026-01-31 04:42:44 +0000 UTC" firstStartedPulling="2026-01-31 04:42:45.732777517 +0000 UTC m=+3289.249112816" lastFinishedPulling="2026-01-31 04:42:46.659581666 +0000 UTC m=+3290.175916975" observedRunningTime="2026-01-31 04:42:47.525356895 +0000 UTC m=+3291.041692214" watchObservedRunningTime="2026-01-31 04:42:47.527462021 +0000 UTC m=+3291.043797330" Jan 31 04:42:52 crc kubenswrapper[4667]: I0131 04:42:52.284198 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:42:52 crc kubenswrapper[4667]: E0131 04:42:52.285567 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" 
podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:43:06 crc kubenswrapper[4667]: I0131 04:43:06.283090 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:43:06 crc kubenswrapper[4667]: E0131 04:43:06.284613 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:43:07 crc kubenswrapper[4667]: I0131 04:43:07.941460 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4kjf7/must-gather-cc9zj"] Jan 31 04:43:07 crc kubenswrapper[4667]: I0131 04:43:07.947393 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4kjf7/must-gather-cc9zj" Jan 31 04:43:07 crc kubenswrapper[4667]: I0131 04:43:07.965223 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4kjf7"/"kube-root-ca.crt" Jan 31 04:43:07 crc kubenswrapper[4667]: I0131 04:43:07.965818 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4kjf7"/"openshift-service-ca.crt" Jan 31 04:43:07 crc kubenswrapper[4667]: I0131 04:43:07.993156 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4kjf7/must-gather-cc9zj"] Jan 31 04:43:08 crc kubenswrapper[4667]: I0131 04:43:08.077963 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e85995d0-41be-46b4-bb54-bc7e234abbaa-must-gather-output\") pod \"must-gather-cc9zj\" (UID: \"e85995d0-41be-46b4-bb54-bc7e234abbaa\") " pod="openshift-must-gather-4kjf7/must-gather-cc9zj" Jan 31 04:43:08 crc kubenswrapper[4667]: I0131 04:43:08.078318 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkfhn\" (UniqueName: \"kubernetes.io/projected/e85995d0-41be-46b4-bb54-bc7e234abbaa-kube-api-access-nkfhn\") pod \"must-gather-cc9zj\" (UID: \"e85995d0-41be-46b4-bb54-bc7e234abbaa\") " pod="openshift-must-gather-4kjf7/must-gather-cc9zj" Jan 31 04:43:08 crc kubenswrapper[4667]: I0131 04:43:08.180395 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkfhn\" (UniqueName: \"kubernetes.io/projected/e85995d0-41be-46b4-bb54-bc7e234abbaa-kube-api-access-nkfhn\") pod \"must-gather-cc9zj\" (UID: \"e85995d0-41be-46b4-bb54-bc7e234abbaa\") " pod="openshift-must-gather-4kjf7/must-gather-cc9zj" Jan 31 04:43:08 crc kubenswrapper[4667]: I0131 04:43:08.181011 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e85995d0-41be-46b4-bb54-bc7e234abbaa-must-gather-output\") pod \"must-gather-cc9zj\" (UID: \"e85995d0-41be-46b4-bb54-bc7e234abbaa\") " pod="openshift-must-gather-4kjf7/must-gather-cc9zj" Jan 31 04:43:08 crc kubenswrapper[4667]: I0131 04:43:08.181569 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e85995d0-41be-46b4-bb54-bc7e234abbaa-must-gather-output\") pod \"must-gather-cc9zj\" (UID: 
\"e85995d0-41be-46b4-bb54-bc7e234abbaa\") " pod="openshift-must-gather-4kjf7/must-gather-cc9zj" Jan 31 04:43:08 crc kubenswrapper[4667]: I0131 04:43:08.203743 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkfhn\" (UniqueName: \"kubernetes.io/projected/e85995d0-41be-46b4-bb54-bc7e234abbaa-kube-api-access-nkfhn\") pod \"must-gather-cc9zj\" (UID: \"e85995d0-41be-46b4-bb54-bc7e234abbaa\") " pod="openshift-must-gather-4kjf7/must-gather-cc9zj" Jan 31 04:43:08 crc kubenswrapper[4667]: I0131 04:43:08.271162 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4kjf7/must-gather-cc9zj" Jan 31 04:43:08 crc kubenswrapper[4667]: I0131 04:43:08.731057 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4kjf7/must-gather-cc9zj"] Jan 31 04:43:09 crc kubenswrapper[4667]: I0131 04:43:09.670857 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4kjf7/must-gather-cc9zj" event={"ID":"e85995d0-41be-46b4-bb54-bc7e234abbaa","Type":"ContainerStarted","Data":"5d7ee84533006d201ced0e0d31c8aaabd6699fa2f4261dde0b0db6869d6e36ae"} Jan 31 04:43:14 crc kubenswrapper[4667]: I0131 04:43:14.744898 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4kjf7/must-gather-cc9zj" event={"ID":"e85995d0-41be-46b4-bb54-bc7e234abbaa","Type":"ContainerStarted","Data":"7638ae809d4c51cf46f318e5ec87f088ad81221328e9dd6f3e7f8fc8b25d621f"} Jan 31 04:43:14 crc kubenswrapper[4667]: I0131 04:43:14.745953 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4kjf7/must-gather-cc9zj" event={"ID":"e85995d0-41be-46b4-bb54-bc7e234abbaa","Type":"ContainerStarted","Data":"67f992d856b9b986303597735ca1c1c154e685d36aa7a3a838b3f171fa1de614"} Jan 31 04:43:14 crc kubenswrapper[4667]: I0131 04:43:14.765416 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4kjf7/must-gather-cc9zj" podStartSLOduration=2.535776224 podStartE2EDuration="7.765394386s" podCreationTimestamp="2026-01-31 04:43:07 +0000 UTC" firstStartedPulling="2026-01-31 04:43:08.761749381 +0000 UTC m=+3312.278084680" lastFinishedPulling="2026-01-31 04:43:13.991367533 +0000 UTC m=+3317.507702842" observedRunningTime="2026-01-31 04:43:14.764254926 +0000 UTC m=+3318.280590225" watchObservedRunningTime="2026-01-31 04:43:14.765394386 +0000 UTC m=+3318.281729695" Jan 31 04:43:17 crc kubenswrapper[4667]: I0131 04:43:17.290472 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:43:17 crc kubenswrapper[4667]: E0131 04:43:17.292661 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:43:19 crc kubenswrapper[4667]: I0131 04:43:19.915114 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4kjf7/crc-debug-4285t"] Jan 31 04:43:19 crc kubenswrapper[4667]: I0131 04:43:19.918249 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4kjf7/crc-debug-4285t" Jan 31 04:43:19 crc kubenswrapper[4667]: I0131 04:43:19.921233 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-4kjf7"/"default-dockercfg-wmxq6" Jan 31 04:43:20 crc kubenswrapper[4667]: I0131 04:43:20.007798 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/227ed998-7d50-4a65-9b37-9452bf89d69f-host\") pod \"crc-debug-4285t\" (UID: \"227ed998-7d50-4a65-9b37-9452bf89d69f\") " pod="openshift-must-gather-4kjf7/crc-debug-4285t" Jan 31 04:43:20 crc kubenswrapper[4667]: I0131 04:43:20.007953 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs82t\" (UniqueName: \"kubernetes.io/projected/227ed998-7d50-4a65-9b37-9452bf89d69f-kube-api-access-gs82t\") pod \"crc-debug-4285t\" (UID: \"227ed998-7d50-4a65-9b37-9452bf89d69f\") " pod="openshift-must-gather-4kjf7/crc-debug-4285t" Jan 31 04:43:20 crc kubenswrapper[4667]: I0131 04:43:20.110547 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/227ed998-7d50-4a65-9b37-9452bf89d69f-host\") pod \"crc-debug-4285t\" (UID: \"227ed998-7d50-4a65-9b37-9452bf89d69f\") " pod="openshift-must-gather-4kjf7/crc-debug-4285t" Jan 31 04:43:20 crc kubenswrapper[4667]: I0131 04:43:20.110675 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs82t\" (UniqueName: \"kubernetes.io/projected/227ed998-7d50-4a65-9b37-9452bf89d69f-kube-api-access-gs82t\") pod \"crc-debug-4285t\" (UID: \"227ed998-7d50-4a65-9b37-9452bf89d69f\") " pod="openshift-must-gather-4kjf7/crc-debug-4285t" Jan 31 04:43:20 crc kubenswrapper[4667]: I0131 04:43:20.111088 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/227ed998-7d50-4a65-9b37-9452bf89d69f-host\") pod \"crc-debug-4285t\" (UID: \"227ed998-7d50-4a65-9b37-9452bf89d69f\") " pod="openshift-must-gather-4kjf7/crc-debug-4285t" Jan 31 04:43:20 crc kubenswrapper[4667]: I0131 04:43:20.138191 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs82t\" (UniqueName: \"kubernetes.io/projected/227ed998-7d50-4a65-9b37-9452bf89d69f-kube-api-access-gs82t\") pod \"crc-debug-4285t\" (UID: \"227ed998-7d50-4a65-9b37-9452bf89d69f\") " pod="openshift-must-gather-4kjf7/crc-debug-4285t" Jan 31 04:43:20 crc kubenswrapper[4667]: I0131 04:43:20.239366 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4kjf7/crc-debug-4285t" Jan 31 04:43:20 crc kubenswrapper[4667]: I0131 04:43:20.808928 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4kjf7/crc-debug-4285t" event={"ID":"227ed998-7d50-4a65-9b37-9452bf89d69f","Type":"ContainerStarted","Data":"359e2902ee4d20532183a7576afe3d71e866eda18ddc544380b095178c1a75a9"} Jan 31 04:43:31 crc kubenswrapper[4667]: I0131 04:43:31.282292 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:43:31 crc kubenswrapper[4667]: E0131 04:43:31.292512 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:43:35 crc kubenswrapper[4667]: I0131 04:43:35.015394 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4kjf7/crc-debug-4285t" event={"ID":"227ed998-7d50-4a65-9b37-9452bf89d69f","Type":"ContainerStarted","Data":"d8197a305767c5325e1d4904e7c29d2b9f1c0c2684a7b7d34611b70958630820"} Jan 31 04:43:35 crc kubenswrapper[4667]: I0131 04:43:35.039949 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4kjf7/crc-debug-4285t" podStartSLOduration=2.210307374 podStartE2EDuration="16.039926922s" podCreationTimestamp="2026-01-31 04:43:19 +0000 UTC" firstStartedPulling="2026-01-31 04:43:20.277764244 +0000 UTC m=+3323.794099543" lastFinishedPulling="2026-01-31 04:43:34.107383792 +0000 UTC m=+3337.623719091" observedRunningTime="2026-01-31 04:43:35.030158696 +0000 UTC m=+3338.546493995" watchObservedRunningTime="2026-01-31 04:43:35.039926922 +0000 UTC m=+3338.556262221" Jan 31 04:43:45 crc kubenswrapper[4667]: I0131 04:43:45.283169 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:43:45 crc kubenswrapper[4667]: E0131 04:43:45.284471 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:43:59 crc kubenswrapper[4667]: I0131 04:43:59.282285 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:43:59 crc kubenswrapper[4667]: E0131 04:43:59.283590 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:44:13 crc kubenswrapper[4667]: I0131 04:44:13.289061 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 
04:44:13 crc kubenswrapper[4667]: E0131 04:44:13.292249 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:44:21 crc kubenswrapper[4667]: I0131 04:44:21.454924 4667 generic.go:334] "Generic (PLEG): container finished" podID="227ed998-7d50-4a65-9b37-9452bf89d69f" containerID="d8197a305767c5325e1d4904e7c29d2b9f1c0c2684a7b7d34611b70958630820" exitCode=0 Jan 31 04:44:21 crc kubenswrapper[4667]: I0131 04:44:21.455011 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4kjf7/crc-debug-4285t" event={"ID":"227ed998-7d50-4a65-9b37-9452bf89d69f","Type":"ContainerDied","Data":"d8197a305767c5325e1d4904e7c29d2b9f1c0c2684a7b7d34611b70958630820"} Jan 31 04:44:22 crc kubenswrapper[4667]: I0131 04:44:22.575936 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4kjf7/crc-debug-4285t" Jan 31 04:44:22 crc kubenswrapper[4667]: I0131 04:44:22.580425 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/227ed998-7d50-4a65-9b37-9452bf89d69f-host\") pod \"227ed998-7d50-4a65-9b37-9452bf89d69f\" (UID: \"227ed998-7d50-4a65-9b37-9452bf89d69f\") " Jan 31 04:44:22 crc kubenswrapper[4667]: I0131 04:44:22.580477 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs82t\" (UniqueName: \"kubernetes.io/projected/227ed998-7d50-4a65-9b37-9452bf89d69f-kube-api-access-gs82t\") pod \"227ed998-7d50-4a65-9b37-9452bf89d69f\" (UID: \"227ed998-7d50-4a65-9b37-9452bf89d69f\") " Jan 31 04:44:22 crc kubenswrapper[4667]: I0131 04:44:22.580692 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/227ed998-7d50-4a65-9b37-9452bf89d69f-host" (OuterVolumeSpecName: "host") pod "227ed998-7d50-4a65-9b37-9452bf89d69f" (UID: "227ed998-7d50-4a65-9b37-9452bf89d69f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:22 crc kubenswrapper[4667]: I0131 04:44:22.581219 4667 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/227ed998-7d50-4a65-9b37-9452bf89d69f-host\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:22 crc kubenswrapper[4667]: I0131 04:44:22.589525 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/227ed998-7d50-4a65-9b37-9452bf89d69f-kube-api-access-gs82t" (OuterVolumeSpecName: "kube-api-access-gs82t") pod "227ed998-7d50-4a65-9b37-9452bf89d69f" (UID: "227ed998-7d50-4a65-9b37-9452bf89d69f"). InnerVolumeSpecName "kube-api-access-gs82t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:44:22 crc kubenswrapper[4667]: I0131 04:44:22.622815 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4kjf7/crc-debug-4285t"] Jan 31 04:44:22 crc kubenswrapper[4667]: I0131 04:44:22.631823 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4kjf7/crc-debug-4285t"] Jan 31 04:44:22 crc kubenswrapper[4667]: I0131 04:44:22.683305 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs82t\" (UniqueName: \"kubernetes.io/projected/227ed998-7d50-4a65-9b37-9452bf89d69f-kube-api-access-gs82t\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:23 crc kubenswrapper[4667]: I0131 04:44:23.300951 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="227ed998-7d50-4a65-9b37-9452bf89d69f" path="/var/lib/kubelet/pods/227ed998-7d50-4a65-9b37-9452bf89d69f/volumes" Jan 31 04:44:23 crc kubenswrapper[4667]: I0131 04:44:23.478517 4667 scope.go:117] "RemoveContainer" containerID="d8197a305767c5325e1d4904e7c29d2b9f1c0c2684a7b7d34611b70958630820" Jan 31 04:44:23 crc kubenswrapper[4667]: I0131 04:44:23.478688 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4kjf7/crc-debug-4285t" Jan 31 04:44:23 crc kubenswrapper[4667]: I0131 04:44:23.829176 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4kjf7/crc-debug-pcgwl"] Jan 31 04:44:23 crc kubenswrapper[4667]: E0131 04:44:23.829994 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="227ed998-7d50-4a65-9b37-9452bf89d69f" containerName="container-00" Jan 31 04:44:23 crc kubenswrapper[4667]: I0131 04:44:23.830011 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="227ed998-7d50-4a65-9b37-9452bf89d69f" containerName="container-00" Jan 31 04:44:23 crc kubenswrapper[4667]: I0131 04:44:23.830232 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="227ed998-7d50-4a65-9b37-9452bf89d69f" containerName="container-00" Jan 31 04:44:23 crc kubenswrapper[4667]: I0131 04:44:23.830950 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4kjf7/crc-debug-pcgwl" Jan 31 04:44:23 crc kubenswrapper[4667]: I0131 04:44:23.833826 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-4kjf7"/"default-dockercfg-wmxq6" Jan 31 04:44:24 crc kubenswrapper[4667]: I0131 04:44:24.008683 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22997113-bf1b-4524-91fc-1c581dd3495a-host\") pod \"crc-debug-pcgwl\" (UID: \"22997113-bf1b-4524-91fc-1c581dd3495a\") " pod="openshift-must-gather-4kjf7/crc-debug-pcgwl" Jan 31 04:44:24 crc kubenswrapper[4667]: I0131 04:44:24.008770 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r75tz\" (UniqueName: \"kubernetes.io/projected/22997113-bf1b-4524-91fc-1c581dd3495a-kube-api-access-r75tz\") pod \"crc-debug-pcgwl\" (UID: \"22997113-bf1b-4524-91fc-1c581dd3495a\") " pod="openshift-must-gather-4kjf7/crc-debug-pcgwl" Jan 31 04:44:24 crc kubenswrapper[4667]: I0131 04:44:24.110931 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22997113-bf1b-4524-91fc-1c581dd3495a-host\") pod \"crc-debug-pcgwl\" (UID: \"22997113-bf1b-4524-91fc-1c581dd3495a\") " pod="openshift-must-gather-4kjf7/crc-debug-pcgwl" Jan 31 04:44:24 crc kubenswrapper[4667]: I0131 04:44:24.111004 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r75tz\" (UniqueName: \"kubernetes.io/projected/22997113-bf1b-4524-91fc-1c581dd3495a-kube-api-access-r75tz\") pod \"crc-debug-pcgwl\" (UID: \"22997113-bf1b-4524-91fc-1c581dd3495a\") " pod="openshift-must-gather-4kjf7/crc-debug-pcgwl" Jan 31 04:44:24 crc kubenswrapper[4667]: I0131 04:44:24.111175 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22997113-bf1b-4524-91fc-1c581dd3495a-host\") pod \"crc-debug-pcgwl\" (UID: \"22997113-bf1b-4524-91fc-1c581dd3495a\") " pod="openshift-must-gather-4kjf7/crc-debug-pcgwl" Jan 31 04:44:24 crc kubenswrapper[4667]: I0131 04:44:24.137280 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r75tz\" (UniqueName: \"kubernetes.io/projected/22997113-bf1b-4524-91fc-1c581dd3495a-kube-api-access-r75tz\") pod \"crc-debug-pcgwl\" (UID: \"22997113-bf1b-4524-91fc-1c581dd3495a\") " pod="openshift-must-gather-4kjf7/crc-debug-pcgwl" Jan 31 04:44:24 crc kubenswrapper[4667]: I0131 04:44:24.151343 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4kjf7/crc-debug-pcgwl" Jan 31 04:44:24 crc kubenswrapper[4667]: I0131 04:44:24.493821 4667 generic.go:334] "Generic (PLEG): container finished" podID="22997113-bf1b-4524-91fc-1c581dd3495a" containerID="3c3b8d8a4877bfd4340474cc013e4fb3c2fd522ed282217dc780dbaccd0c9a80" exitCode=0 Jan 31 04:44:24 crc kubenswrapper[4667]: I0131 04:44:24.493914 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4kjf7/crc-debug-pcgwl" event={"ID":"22997113-bf1b-4524-91fc-1c581dd3495a","Type":"ContainerDied","Data":"3c3b8d8a4877bfd4340474cc013e4fb3c2fd522ed282217dc780dbaccd0c9a80"} Jan 31 04:44:24 crc kubenswrapper[4667]: I0131 04:44:24.494335 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4kjf7/crc-debug-pcgwl" event={"ID":"22997113-bf1b-4524-91fc-1c581dd3495a","Type":"ContainerStarted","Data":"86fc8914438990485c570cb4ff3e3f71e47f7781f1bd35af272a96c3e07a30a5"} Jan 31 04:44:24 crc kubenswrapper[4667]: E0131 04:44:24.720071 4667 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22997113_bf1b_4524_91fc_1c581dd3495a.slice/crio-3c3b8d8a4877bfd4340474cc013e4fb3c2fd522ed282217dc780dbaccd0c9a80.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22997113_bf1b_4524_91fc_1c581dd3495a.slice/crio-conmon-3c3b8d8a4877bfd4340474cc013e4fb3c2fd522ed282217dc780dbaccd0c9a80.scope\": RecentStats: unable to find data in memory cache]" Jan 31 04:44:24 crc kubenswrapper[4667]: I0131 04:44:24.910002 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4kjf7/crc-debug-pcgwl"] Jan 31 04:44:24 crc kubenswrapper[4667]: I0131 04:44:24.921440 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4kjf7/crc-debug-pcgwl"] Jan 31 04:44:25 crc kubenswrapper[4667]: I0131 04:44:25.628468 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4kjf7/crc-debug-pcgwl" Jan 31 04:44:25 crc kubenswrapper[4667]: I0131 04:44:25.749807 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r75tz\" (UniqueName: \"kubernetes.io/projected/22997113-bf1b-4524-91fc-1c581dd3495a-kube-api-access-r75tz\") pod \"22997113-bf1b-4524-91fc-1c581dd3495a\" (UID: \"22997113-bf1b-4524-91fc-1c581dd3495a\") " Jan 31 04:44:25 crc kubenswrapper[4667]: I0131 04:44:25.749898 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22997113-bf1b-4524-91fc-1c581dd3495a-host\") pod \"22997113-bf1b-4524-91fc-1c581dd3495a\" (UID: \"22997113-bf1b-4524-91fc-1c581dd3495a\") " Jan 31 04:44:25 crc kubenswrapper[4667]: I0131 04:44:25.750026 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22997113-bf1b-4524-91fc-1c581dd3495a-host" (OuterVolumeSpecName: "host") pod "22997113-bf1b-4524-91fc-1c581dd3495a" (UID: "22997113-bf1b-4524-91fc-1c581dd3495a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:25 crc kubenswrapper[4667]: I0131 04:44:25.750758 4667 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22997113-bf1b-4524-91fc-1c581dd3495a-host\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:25 crc kubenswrapper[4667]: I0131 04:44:25.763484 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22997113-bf1b-4524-91fc-1c581dd3495a-kube-api-access-r75tz" (OuterVolumeSpecName: "kube-api-access-r75tz") pod "22997113-bf1b-4524-91fc-1c581dd3495a" (UID: "22997113-bf1b-4524-91fc-1c581dd3495a"). InnerVolumeSpecName "kube-api-access-r75tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:44:25 crc kubenswrapper[4667]: I0131 04:44:25.852905 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r75tz\" (UniqueName: \"kubernetes.io/projected/22997113-bf1b-4524-91fc-1c581dd3495a-kube-api-access-r75tz\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:26 crc kubenswrapper[4667]: I0131 04:44:26.124369 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4kjf7/crc-debug-t2pjs"] Jan 31 04:44:26 crc kubenswrapper[4667]: E0131 04:44:26.124878 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22997113-bf1b-4524-91fc-1c581dd3495a" containerName="container-00" Jan 31 04:44:26 crc kubenswrapper[4667]: I0131 04:44:26.124895 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="22997113-bf1b-4524-91fc-1c581dd3495a" containerName="container-00" Jan 31 04:44:26 crc kubenswrapper[4667]: I0131 04:44:26.125135 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="22997113-bf1b-4524-91fc-1c581dd3495a" containerName="container-00" Jan 31 04:44:26 crc kubenswrapper[4667]: I0131 04:44:26.125868 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4kjf7/crc-debug-t2pjs" Jan 31 04:44:26 crc kubenswrapper[4667]: I0131 04:44:26.261533 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf-host\") pod \"crc-debug-t2pjs\" (UID: \"26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf\") " pod="openshift-must-gather-4kjf7/crc-debug-t2pjs" Jan 31 04:44:26 crc kubenswrapper[4667]: I0131 04:44:26.261626 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fb2g\" (UniqueName: \"kubernetes.io/projected/26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf-kube-api-access-5fb2g\") pod \"crc-debug-t2pjs\" (UID: \"26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf\") " pod="openshift-must-gather-4kjf7/crc-debug-t2pjs" Jan 31 04:44:26 crc kubenswrapper[4667]: I0131 04:44:26.281329 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:44:26 crc kubenswrapper[4667]: E0131 04:44:26.281733 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:44:26 crc kubenswrapper[4667]: I0131 04:44:26.363461 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fb2g\" (UniqueName: \"kubernetes.io/projected/26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf-kube-api-access-5fb2g\") pod \"crc-debug-t2pjs\" (UID: \"26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf\") " pod="openshift-must-gather-4kjf7/crc-debug-t2pjs" Jan 31 04:44:26 crc kubenswrapper[4667]: I0131 04:44:26.363711 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf-host\") pod \"crc-debug-t2pjs\" (UID: \"26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf\") " pod="openshift-must-gather-4kjf7/crc-debug-t2pjs" Jan 31 04:44:26 crc kubenswrapper[4667]: I0131 04:44:26.363892 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf-host\") pod \"crc-debug-t2pjs\" (UID: \"26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf\") " pod="openshift-must-gather-4kjf7/crc-debug-t2pjs" Jan 31 04:44:26 crc kubenswrapper[4667]: I0131 04:44:26.411163 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fb2g\" (UniqueName: \"kubernetes.io/projected/26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf-kube-api-access-5fb2g\") pod \"crc-debug-t2pjs\" (UID: \"26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf\") " pod="openshift-must-gather-4kjf7/crc-debug-t2pjs" Jan 31 04:44:26 crc kubenswrapper[4667]: I0131 04:44:26.450555 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4kjf7/crc-debug-t2pjs" Jan 31 04:44:26 crc kubenswrapper[4667]: I0131 04:44:26.518455 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4kjf7/crc-debug-t2pjs" event={"ID":"26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf","Type":"ContainerStarted","Data":"6270c55158de263a5c38ad5c628c0754ae6ed3da66f7c65b71cf0f90d16c524a"} Jan 31 04:44:26 crc kubenswrapper[4667]: I0131 04:44:26.520531 4667 scope.go:117] "RemoveContainer" containerID="3c3b8d8a4877bfd4340474cc013e4fb3c2fd522ed282217dc780dbaccd0c9a80" Jan 31 04:44:26 crc kubenswrapper[4667]: I0131 04:44:26.520576 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4kjf7/crc-debug-pcgwl" Jan 31 04:44:27 crc kubenswrapper[4667]: I0131 04:44:27.302398 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22997113-bf1b-4524-91fc-1c581dd3495a" path="/var/lib/kubelet/pods/22997113-bf1b-4524-91fc-1c581dd3495a/volumes" Jan 31 04:44:27 crc kubenswrapper[4667]: I0131 04:44:27.539781 4667 generic.go:334] "Generic (PLEG): container finished" podID="26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf" containerID="e6b0e5005d77416d772e735bda67730424b7dc7b40ccc3b4d1d8346a0e29d374" exitCode=0 Jan 31 04:44:27 crc kubenswrapper[4667]: I0131 04:44:27.539866 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4kjf7/crc-debug-t2pjs" event={"ID":"26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf","Type":"ContainerDied","Data":"e6b0e5005d77416d772e735bda67730424b7dc7b40ccc3b4d1d8346a0e29d374"} Jan 31 04:44:27 crc kubenswrapper[4667]: I0131 04:44:27.601110 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4kjf7/crc-debug-t2pjs"] Jan 31 04:44:27 crc kubenswrapper[4667]: I0131 04:44:27.613695 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4kjf7/crc-debug-t2pjs"] Jan 31 04:44:28 crc kubenswrapper[4667]: I0131 04:44:28.700915 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4kjf7/crc-debug-t2pjs" Jan 31 04:44:28 crc kubenswrapper[4667]: I0131 04:44:28.727285 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fb2g\" (UniqueName: \"kubernetes.io/projected/26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf-kube-api-access-5fb2g\") pod \"26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf\" (UID: \"26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf\") " Jan 31 04:44:28 crc kubenswrapper[4667]: I0131 04:44:28.727750 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf-host\") pod \"26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf\" (UID: \"26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf\") " Jan 31 04:44:28 crc kubenswrapper[4667]: I0131 04:44:28.728412 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf-host" (OuterVolumeSpecName: "host") pod "26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf" (UID: "26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:44:28 crc kubenswrapper[4667]: I0131 04:44:28.740542 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf-kube-api-access-5fb2g" (OuterVolumeSpecName: "kube-api-access-5fb2g") pod "26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf" (UID: "26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf"). InnerVolumeSpecName "kube-api-access-5fb2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:44:28 crc kubenswrapper[4667]: I0131 04:44:28.832970 4667 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf-host\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:28 crc kubenswrapper[4667]: I0131 04:44:28.833020 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fb2g\" (UniqueName: \"kubernetes.io/projected/26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf-kube-api-access-5fb2g\") on node \"crc\" DevicePath \"\"" Jan 31 04:44:29 crc kubenswrapper[4667]: I0131 04:44:29.302455 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf" path="/var/lib/kubelet/pods/26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf/volumes" Jan 31 04:44:29 crc kubenswrapper[4667]: I0131 04:44:29.566338 4667 scope.go:117] "RemoveContainer" containerID="e6b0e5005d77416d772e735bda67730424b7dc7b40ccc3b4d1d8346a0e29d374" Jan 31 04:44:29 crc kubenswrapper[4667]: I0131 04:44:29.566478 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4kjf7/crc-debug-t2pjs" Jan 31 04:44:38 crc kubenswrapper[4667]: I0131 04:44:38.282702 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:44:38 crc kubenswrapper[4667]: E0131 04:44:38.284087 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:44:52 crc kubenswrapper[4667]: I0131 04:44:52.286787 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:44:52 crc kubenswrapper[4667]: E0131 04:44:52.288315 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:44:57 crc kubenswrapper[4667]: I0131 04:44:57.542906 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7b9d9fcc56-wmjp8_d8b59858-7b18-4bad-b555-b978f3fbea56/barbican-api/0.log" Jan 31 04:44:57 crc kubenswrapper[4667]: I0131 04:44:57.850458 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d8b947646-tj8c8_c6848ab0-06c2-4eed-9c5e-a1e205da260a/barbican-keystone-listener/0.log" Jan 31 04:44:58 crc kubenswrapper[4667]: I0131 
04:44:58.121409 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d8b947646-tj8c8_c6848ab0-06c2-4eed-9c5e-a1e205da260a/barbican-keystone-listener-log/0.log" Jan 31 04:44:58 crc kubenswrapper[4667]: I0131 04:44:58.159197 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-666d645645-4kb44_efc85fb0-e1c4-4a14-aeeb-a0526ff668d1/barbican-worker/0.log" Jan 31 04:44:58 crc kubenswrapper[4667]: I0131 04:44:58.398936 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7b9d9fcc56-wmjp8_d8b59858-7b18-4bad-b555-b978f3fbea56/barbican-api-log/0.log" Jan 31 04:44:58 crc kubenswrapper[4667]: I0131 04:44:58.674287 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-666d645645-4kb44_efc85fb0-e1c4-4a14-aeeb-a0526ff668d1/barbican-worker-log/0.log" Jan 31 04:44:58 crc kubenswrapper[4667]: I0131 04:44:58.768974 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5_24442823-d584-44f3-bf92-1e3382adb87f/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:44:58 crc kubenswrapper[4667]: I0131 04:44:58.958536 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ef1c8a6a-c6c2-451b-9030-9689f2ed116f/ceilometer-central-agent/0.log" Jan 31 04:44:58 crc kubenswrapper[4667]: I0131 04:44:58.964298 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ef1c8a6a-c6c2-451b-9030-9689f2ed116f/ceilometer-notification-agent/0.log" Jan 31 04:44:59 crc kubenswrapper[4667]: I0131 04:44:59.046409 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ef1c8a6a-c6c2-451b-9030-9689f2ed116f/proxy-httpd/0.log" Jan 31 04:44:59 crc kubenswrapper[4667]: I0131 04:44:59.104650 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ef1c8a6a-c6c2-451b-9030-9689f2ed116f/sg-core/0.log" Jan 31 04:44:59 crc kubenswrapper[4667]: I0131 04:44:59.327477 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_10513551-238c-4a99-83c9-2992fb1bbaae/cinder-api/0.log" Jan 31 04:44:59 crc kubenswrapper[4667]: I0131 04:44:59.345068 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_10513551-238c-4a99-83c9-2992fb1bbaae/cinder-api-log/0.log" Jan 31 04:44:59 crc kubenswrapper[4667]: I0131 04:44:59.584488 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a/cinder-scheduler/0.log" Jan 31 04:44:59 crc kubenswrapper[4667]: I0131 04:44:59.591587 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a/probe/0.log" Jan 31 04:44:59 crc kubenswrapper[4667]: I0131 04:44:59.835319 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-jpglr_2c49961f-cfd8-428d-b32b-4e3f85e554d5/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:44:59 crc kubenswrapper[4667]: I0131 04:44:59.916109 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8_f2ba4344-86fc-4f0f-86ed-7daec27549ec/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.154364 4667 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw"] Jan 31 04:45:00 crc kubenswrapper[4667]: E0131 04:45:00.154913 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf" containerName="container-00" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.154936 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf" containerName="container-00" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.155408 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="26dac6ca-27b0-4869-b7aa-3b7cc2e1c8bf" containerName="container-00" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.156119 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.169158 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.169428 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.171949 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw"] Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.177026 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-f4d4c4b7-jpndh_ab755590-ad93-4840-b261-9317b1c0cb54/init/0.log" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.273268 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/260ad817-b342-4429-9d66-4cd22eea9773-secret-volume\") pod \"collect-profiles-29497245-c5tvw\" (UID: \"260ad817-b342-4429-9d66-4cd22eea9773\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.273348 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljgnd\" (UniqueName: \"kubernetes.io/projected/260ad817-b342-4429-9d66-4cd22eea9773-kube-api-access-ljgnd\") pod \"collect-profiles-29497245-c5tvw\" (UID: \"260ad817-b342-4429-9d66-4cd22eea9773\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.273455 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/260ad817-b342-4429-9d66-4cd22eea9773-config-volume\") pod \"collect-profiles-29497245-c5tvw\" (UID: \"260ad817-b342-4429-9d66-4cd22eea9773\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.375650 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/260ad817-b342-4429-9d66-4cd22eea9773-config-volume\") pod \"collect-profiles-29497245-c5tvw\" (UID: \"260ad817-b342-4429-9d66-4cd22eea9773\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.375790 4667 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/260ad817-b342-4429-9d66-4cd22eea9773-secret-volume\") pod \"collect-profiles-29497245-c5tvw\" (UID: \"260ad817-b342-4429-9d66-4cd22eea9773\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.375857 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljgnd\" (UniqueName: \"kubernetes.io/projected/260ad817-b342-4429-9d66-4cd22eea9773-kube-api-access-ljgnd\") pod \"collect-profiles-29497245-c5tvw\" (UID: \"260ad817-b342-4429-9d66-4cd22eea9773\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.377515 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/260ad817-b342-4429-9d66-4cd22eea9773-config-volume\") pod \"collect-profiles-29497245-c5tvw\" (UID: \"260ad817-b342-4429-9d66-4cd22eea9773\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.404647 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/260ad817-b342-4429-9d66-4cd22eea9773-secret-volume\") pod \"collect-profiles-29497245-c5tvw\" (UID: \"260ad817-b342-4429-9d66-4cd22eea9773\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.405659 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-f4d4c4b7-jpndh_ab755590-ad93-4840-b261-9317b1c0cb54/init/0.log" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.407798 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljgnd\" (UniqueName: \"kubernetes.io/projected/260ad817-b342-4429-9d66-4cd22eea9773-kube-api-access-ljgnd\") pod \"collect-profiles-29497245-c5tvw\" (UID: \"260ad817-b342-4429-9d66-4cd22eea9773\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.481523 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.615619 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-r7phz_15d1c9f5-7546-4262-ada1-71b362ddd67e/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.629274 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-f4d4c4b7-jpndh_ab755590-ad93-4840-b261-9317b1c0cb54/dnsmasq-dns/0.log" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.959320 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e2e0a40d-c35f-443a-97b3-0150c13d56e4/glance-log/0.log" Jan 31 04:45:00 crc kubenswrapper[4667]: I0131 04:45:00.978484 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e2e0a40d-c35f-443a-97b3-0150c13d56e4/glance-httpd/0.log" Jan 31 04:45:01 crc kubenswrapper[4667]: I0131 04:45:01.040888 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw"] Jan 31 04:45:01 crc kubenswrapper[4667]: W0131 04:45:01.054043 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod260ad817_b342_4429_9d66_4cd22eea9773.slice/crio-8716af0b0b6695c37464cbae6d763278ceb950093098377cbe8348046dd66414 WatchSource:0}: Error finding container 8716af0b0b6695c37464cbae6d763278ceb950093098377cbe8348046dd66414: Status 404 returned error can't find the container with id 8716af0b0b6695c37464cbae6d763278ceb950093098377cbe8348046dd66414 Jan 31 04:45:01 crc kubenswrapper[4667]: I0131 04:45:01.254586 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b9aae903-8070-44c6-8826-ec0ff7d90139/glance-httpd/0.log" Jan 31 04:45:01 crc kubenswrapper[4667]: I0131 04:45:01.319584 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b9aae903-8070-44c6-8826-ec0ff7d90139/glance-log/0.log" Jan 31 04:45:01 crc kubenswrapper[4667]: I0131 04:45:01.512222 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-86c748c4d6-2grmh_c6974567-3bea-447a-bb8b-ced22b6d34ce/horizon/3.log" Jan 31 04:45:01 crc kubenswrapper[4667]: I0131 04:45:01.631788 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-86c748c4d6-2grmh_c6974567-3bea-447a-bb8b-ced22b6d34ce/horizon/2.log" Jan 31 04:45:01 crc kubenswrapper[4667]: I0131 04:45:01.815613 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-86c748c4d6-2grmh_c6974567-3bea-447a-bb8b-ced22b6d34ce/horizon-log/0.log" Jan 31 04:45:01 crc kubenswrapper[4667]: I0131 04:45:01.889290 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw" event={"ID":"260ad817-b342-4429-9d66-4cd22eea9773","Type":"ContainerStarted","Data":"8306d2a3b9356151602a181ca3a221a56a6121f3670590ef6378b48babbbcc82"} Jan 31 04:45:01 crc kubenswrapper[4667]: I0131 04:45:01.889364 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw" 
event={"ID":"260ad817-b342-4429-9d66-4cd22eea9773","Type":"ContainerStarted","Data":"8716af0b0b6695c37464cbae6d763278ceb950093098377cbe8348046dd66414"} Jan 31 04:45:01 crc kubenswrapper[4667]: I0131 04:45:01.923452 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw" podStartSLOduration=1.923419278 podStartE2EDuration="1.923419278s" podCreationTimestamp="2026-01-31 04:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:45:01.914378041 +0000 UTC m=+3425.430713340" watchObservedRunningTime="2026-01-31 04:45:01.923419278 +0000 UTC m=+3425.439754577" Jan 31 04:45:01 crc kubenswrapper[4667]: I0131 04:45:01.994263 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-pcntk_f33e0c1e-9f27-49a0-8132-0516b49d5ceb/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:45:02 crc kubenswrapper[4667]: I0131 04:45:02.141790 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-2xhw7_c1426178-3085-452c-8da2-15a2bce73a55/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:45:02 crc kubenswrapper[4667]: I0131 04:45:02.390171 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5558665b54-mq2t5_2cf275de-3442-4fe5-ab8b-a4796c0bc829/keystone-api/0.log" Jan 31 04:45:02 crc kubenswrapper[4667]: I0131 04:45:02.448359 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ee717f47-2475-42f9-b4ce-25960d0fa24c/kube-state-metrics/0.log" Jan 31 04:45:02 crc kubenswrapper[4667]: I0131 04:45:02.471526 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k_a8376acd-0ea2-4ac1-a843-59932a976b4e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:45:02 crc kubenswrapper[4667]: I0131 04:45:02.830257 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7f55cc74b5-gg8dl_48966487-81e5-4e5d-9a74-fbbf2b1091ae/neutron-httpd/0.log" Jan 31 04:45:02 crc kubenswrapper[4667]: I0131 04:45:02.857878 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7f55cc74b5-gg8dl_48966487-81e5-4e5d-9a74-fbbf2b1091ae/neutron-api/0.log" Jan 31 04:45:02 crc kubenswrapper[4667]: I0131 04:45:02.904596 4667 generic.go:334] "Generic (PLEG): container finished" podID="260ad817-b342-4429-9d66-4cd22eea9773" containerID="8306d2a3b9356151602a181ca3a221a56a6121f3670590ef6378b48babbbcc82" exitCode=0 Jan 31 04:45:02 crc kubenswrapper[4667]: I0131 04:45:02.904663 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw" event={"ID":"260ad817-b342-4429-9d66-4cd22eea9773","Type":"ContainerDied","Data":"8306d2a3b9356151602a181ca3a221a56a6121f3670590ef6378b48babbbcc82"} Jan 31 04:45:03 crc kubenswrapper[4667]: I0131 04:45:03.182830 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb_92bb44a8-6936-4c3f-96f6-b9572d90574d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:45:03 crc kubenswrapper[4667]: I0131 04:45:03.576219 4667 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_162d25d8-8fbe-4a52-808b-971f2017bfc0/nova-api-log/0.log" Jan 31 04:45:03 crc kubenswrapper[4667]: I0131 04:45:03.752001 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_162d25d8-8fbe-4a52-808b-971f2017bfc0/nova-api-api/0.log" Jan 31 04:45:04 crc kubenswrapper[4667]: I0131 04:45:04.091968 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_841e82c7-29d0-414e-a01d-05718a83749b/nova-cell0-conductor-conductor/0.log" Jan 31 04:45:04 crc kubenswrapper[4667]: I0131 04:45:04.227010 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_172b5953-ebb3-4eae-b8ee-33d59574f2ac/nova-cell1-conductor-conductor/0.log" Jan 31 04:45:04 crc kubenswrapper[4667]: I0131 04:45:04.369590 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw" Jan 31 04:45:04 crc kubenswrapper[4667]: I0131 04:45:04.380776 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/260ad817-b342-4429-9d66-4cd22eea9773-config-volume\") pod \"260ad817-b342-4429-9d66-4cd22eea9773\" (UID: \"260ad817-b342-4429-9d66-4cd22eea9773\") " Jan 31 04:45:04 crc kubenswrapper[4667]: I0131 04:45:04.380900 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/260ad817-b342-4429-9d66-4cd22eea9773-secret-volume\") pod \"260ad817-b342-4429-9d66-4cd22eea9773\" (UID: \"260ad817-b342-4429-9d66-4cd22eea9773\") " Jan 31 04:45:04 crc kubenswrapper[4667]: I0131 04:45:04.380934 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljgnd\" (UniqueName: \"kubernetes.io/projected/260ad817-b342-4429-9d66-4cd22eea9773-kube-api-access-ljgnd\") pod \"260ad817-b342-4429-9d66-4cd22eea9773\" (UID: \"260ad817-b342-4429-9d66-4cd22eea9773\") " Jan 31 04:45:04 crc kubenswrapper[4667]: I0131 04:45:04.382693 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/260ad817-b342-4429-9d66-4cd22eea9773-config-volume" (OuterVolumeSpecName: "config-volume") pod "260ad817-b342-4429-9d66-4cd22eea9773" (UID: "260ad817-b342-4429-9d66-4cd22eea9773"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 04:45:04 crc kubenswrapper[4667]: I0131 04:45:04.396661 4667 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/260ad817-b342-4429-9d66-4cd22eea9773-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:04 crc kubenswrapper[4667]: I0131 04:45:04.396868 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/260ad817-b342-4429-9d66-4cd22eea9773-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "260ad817-b342-4429-9d66-4cd22eea9773" (UID: "260ad817-b342-4429-9d66-4cd22eea9773"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 04:45:04 crc kubenswrapper[4667]: I0131 04:45:04.399797 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_4aa65868-008b-4a37-ba24-d4d3872c00c7/nova-cell1-novncproxy-novncproxy/0.log" Jan 31 04:45:04 crc kubenswrapper[4667]: I0131 04:45:04.425039 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/260ad817-b342-4429-9d66-4cd22eea9773-kube-api-access-ljgnd" (OuterVolumeSpecName: "kube-api-access-ljgnd") pod "260ad817-b342-4429-9d66-4cd22eea9773" (UID: "260ad817-b342-4429-9d66-4cd22eea9773"). InnerVolumeSpecName "kube-api-access-ljgnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:45:04 crc kubenswrapper[4667]: I0131 04:45:04.498973 4667 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/260ad817-b342-4429-9d66-4cd22eea9773-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:04 crc kubenswrapper[4667]: I0131 04:45:04.499003 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljgnd\" (UniqueName: \"kubernetes.io/projected/260ad817-b342-4429-9d66-4cd22eea9773-kube-api-access-ljgnd\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:04 crc kubenswrapper[4667]: I0131 04:45:04.707200 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-7swhs_8d2d3410-e5e4-4607-ab3c-74199d66293d/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:45:04 crc kubenswrapper[4667]: I0131 04:45:04.869777 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b91fdcb3-e7f6-40d0-97d1-4db13213d61a/nova-metadata-log/0.log" Jan 31 04:45:04 crc kubenswrapper[4667]: I0131 04:45:04.937184 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw" event={"ID":"260ad817-b342-4429-9d66-4cd22eea9773","Type":"ContainerDied","Data":"8716af0b0b6695c37464cbae6d763278ceb950093098377cbe8348046dd66414"} Jan 31 04:45:04 crc kubenswrapper[4667]: I0131 04:45:04.937234 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8716af0b0b6695c37464cbae6d763278ceb950093098377cbe8348046dd66414" Jan 31 04:45:04 crc kubenswrapper[4667]: I0131 04:45:04.937254 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497245-c5tvw" Jan 31 04:45:05 crc kubenswrapper[4667]: I0131 04:45:05.027272 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz"] Jan 31 04:45:05 crc kubenswrapper[4667]: I0131 04:45:05.066321 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497200-jtnvz"] Jan 31 04:45:05 crc kubenswrapper[4667]: I0131 04:45:05.282776 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_cf1db9a1-f45c-41f0-8d76-2c0318f0299b/nova-scheduler-scheduler/0.log" Jan 31 04:45:05 crc kubenswrapper[4667]: I0131 04:45:05.318421 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="704ddfdd-061e-4dff-a878-3c0755c07a6d" path="/var/lib/kubelet/pods/704ddfdd-061e-4dff-a878-3c0755c07a6d/volumes" Jan 31 04:45:05 crc kubenswrapper[4667]: I0131 04:45:05.466802 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7/mysql-bootstrap/0.log" Jan 31 04:45:05 crc kubenswrapper[4667]: I0131 04:45:05.693301 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7/mysql-bootstrap/0.log" Jan 31 04:45:05 crc kubenswrapper[4667]: I0131 04:45:05.765121 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7/galera/0.log" Jan 31 04:45:06 crc kubenswrapper[4667]: I0131 04:45:06.004015 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fc6e0899-ca0f-4aac-8510-cf35066a3290/mysql-bootstrap/0.log" Jan 31 04:45:06 crc kubenswrapper[4667]: I0131 04:45:06.214662 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b91fdcb3-e7f6-40d0-97d1-4db13213d61a/nova-metadata-metadata/0.log" Jan 31 04:45:06 crc kubenswrapper[4667]: I0131 04:45:06.240333 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fc6e0899-ca0f-4aac-8510-cf35066a3290/galera/0.log" Jan 31 04:45:06 crc kubenswrapper[4667]: I0131 04:45:06.277475 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fc6e0899-ca0f-4aac-8510-cf35066a3290/mysql-bootstrap/0.log" Jan 31 04:45:06 crc kubenswrapper[4667]: I0131 04:45:06.562754 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c47c09d9-21e3-4c10-936f-0d679cf6a8f1/openstackclient/0.log" Jan 31 04:45:06 crc kubenswrapper[4667]: I0131 04:45:06.580212 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-cn9wc_39c3d98f-a6b1-4558-b565-c9f8c3afa543/ovn-controller/0.log" Jan 31 04:45:06 crc kubenswrapper[4667]: I0131 04:45:06.846803 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-hbhzb_73d60e7c-9a2f-4e04-8b13-31956316c5dc/openstack-network-exporter/0.log" Jan 31 04:45:07 crc kubenswrapper[4667]: I0131 04:45:07.012950 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-m545l_c3c43380-7b18-44fd-98f5-b9016923cdcb/ovsdb-server-init/0.log" Jan 31 04:45:07 crc kubenswrapper[4667]: I0131 04:45:07.188789 4667 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-m545l_c3c43380-7b18-44fd-98f5-b9016923cdcb/ovs-vswitchd/0.log" Jan 31 04:45:07 crc kubenswrapper[4667]: I0131 04:45:07.203545 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-m545l_c3c43380-7b18-44fd-98f5-b9016923cdcb/ovsdb-server-init/0.log" Jan 31 04:45:07 crc kubenswrapper[4667]: I0131 04:45:07.306909 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:45:07 crc kubenswrapper[4667]: E0131 04:45:07.307246 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:45:07 crc kubenswrapper[4667]: I0131 04:45:07.315131 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-m545l_c3c43380-7b18-44fd-98f5-b9016923cdcb/ovsdb-server/0.log" Jan 31 04:45:07 crc kubenswrapper[4667]: I0131 04:45:07.607584 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-rdvv9_68a411c8-a168-43be-997d-d8a1313da926/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:45:07 crc kubenswrapper[4667]: I0131 04:45:07.694152 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1e5983aa-121c-4344-884a-438181c3ac0d/openstack-network-exporter/0.log" Jan 31 04:45:07 crc kubenswrapper[4667]: I0131 04:45:07.763976 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1e5983aa-121c-4344-884a-438181c3ac0d/ovn-northd/0.log" Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.002865 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z57jn"] Jan 31 04:45:08 crc kubenswrapper[4667]: E0131 04:45:08.003399 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="260ad817-b342-4429-9d66-4cd22eea9773" containerName="collect-profiles" Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.003421 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="260ad817-b342-4429-9d66-4cd22eea9773" containerName="collect-profiles" Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.003642 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="260ad817-b342-4429-9d66-4cd22eea9773" containerName="collect-profiles" Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.005072 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z57jn" Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.024830 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z57jn"] Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.052887 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5b881387-78fb-40db-8985-412849ad9068/openstack-network-exporter/0.log" Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.058049 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5b881387-78fb-40db-8985-412849ad9068/ovsdbserver-nb/0.log" Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.113305 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63268bd-d150-4fbc-a77d-bf50a48c1e69-utilities\") pod \"redhat-operators-z57jn\" (UID: \"f63268bd-d150-4fbc-a77d-bf50a48c1e69\") " pod="openshift-marketplace/redhat-operators-z57jn" Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.113434 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63268bd-d150-4fbc-a77d-bf50a48c1e69-catalog-content\") pod \"redhat-operators-z57jn\" (UID: \"f63268bd-d150-4fbc-a77d-bf50a48c1e69\") " pod="openshift-marketplace/redhat-operators-z57jn" Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.113510 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trng7\" (UniqueName: \"kubernetes.io/projected/f63268bd-d150-4fbc-a77d-bf50a48c1e69-kube-api-access-trng7\") pod \"redhat-operators-z57jn\" (UID: \"f63268bd-d150-4fbc-a77d-bf50a48c1e69\") " pod="openshift-marketplace/redhat-operators-z57jn" Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.215501 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trng7\" (UniqueName: \"kubernetes.io/projected/f63268bd-d150-4fbc-a77d-bf50a48c1e69-kube-api-access-trng7\") pod \"redhat-operators-z57jn\" (UID: \"f63268bd-d150-4fbc-a77d-bf50a48c1e69\") " pod="openshift-marketplace/redhat-operators-z57jn" Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.215592 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63268bd-d150-4fbc-a77d-bf50a48c1e69-utilities\") pod \"redhat-operators-z57jn\" (UID: \"f63268bd-d150-4fbc-a77d-bf50a48c1e69\") " pod="openshift-marketplace/redhat-operators-z57jn" Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.215674 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63268bd-d150-4fbc-a77d-bf50a48c1e69-catalog-content\") pod \"redhat-operators-z57jn\" (UID: \"f63268bd-d150-4fbc-a77d-bf50a48c1e69\") " pod="openshift-marketplace/redhat-operators-z57jn" Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.216195 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63268bd-d150-4fbc-a77d-bf50a48c1e69-catalog-content\") pod \"redhat-operators-z57jn\" (UID: \"f63268bd-d150-4fbc-a77d-bf50a48c1e69\") " pod="openshift-marketplace/redhat-operators-z57jn" Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.216309 4667 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63268bd-d150-4fbc-a77d-bf50a48c1e69-utilities\") pod \"redhat-operators-z57jn\" (UID: \"f63268bd-d150-4fbc-a77d-bf50a48c1e69\") " pod="openshift-marketplace/redhat-operators-z57jn" Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.248377 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trng7\" (UniqueName: \"kubernetes.io/projected/f63268bd-d150-4fbc-a77d-bf50a48c1e69-kube-api-access-trng7\") pod \"redhat-operators-z57jn\" (UID: \"f63268bd-d150-4fbc-a77d-bf50a48c1e69\") " pod="openshift-marketplace/redhat-operators-z57jn" Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.381023 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z57jn" Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.513692 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7/openstack-network-exporter/0.log" Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.655534 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7/ovsdbserver-sb/0.log" Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.846615 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f87b7b68-pjkwf_95dba098-f46c-4948-ab9b-c05d9bf48660/placement-api/0.log" Jan 31 04:45:08 crc kubenswrapper[4667]: I0131 04:45:08.949496 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z57jn"] Jan 31 04:45:09 crc kubenswrapper[4667]: I0131 04:45:09.001738 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z57jn" event={"ID":"f63268bd-d150-4fbc-a77d-bf50a48c1e69","Type":"ContainerStarted","Data":"93daba788c165eb2def797790bf390e80584151bb9c7f8cff0d73fb8825b2b60"} Jan 31 04:45:09 crc kubenswrapper[4667]: I0131 04:45:09.090630 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_acadb76e-2e9d-4af4-a5d1-fb5f28b006c6/setup-container/0.log" Jan 31 04:45:09 crc kubenswrapper[4667]: I0131 04:45:09.128574 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f87b7b68-pjkwf_95dba098-f46c-4948-ab9b-c05d9bf48660/placement-log/0.log" Jan 31 04:45:09 crc kubenswrapper[4667]: I0131 04:45:09.664830 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_acadb76e-2e9d-4af4-a5d1-fb5f28b006c6/rabbitmq/0.log" Jan 31 04:45:09 crc kubenswrapper[4667]: I0131 04:45:09.670239 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_aca13392-5591-4b68-9948-c5e5fe558803/setup-container/0.log" Jan 31 04:45:09 crc kubenswrapper[4667]: I0131 04:45:09.798192 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_acadb76e-2e9d-4af4-a5d1-fb5f28b006c6/setup-container/0.log" Jan 31 04:45:10 crc kubenswrapper[4667]: I0131 04:45:10.040771 4667 generic.go:334] "Generic (PLEG): container finished" podID="f63268bd-d150-4fbc-a77d-bf50a48c1e69" containerID="9e393aed50c55b61a2687274140374ad5eea7a63e632565fd90b5406b0bb6a61" exitCode=0 Jan 31 04:45:10 crc kubenswrapper[4667]: I0131 04:45:10.041123 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z57jn" 
event={"ID":"f63268bd-d150-4fbc-a77d-bf50a48c1e69","Type":"ContainerDied","Data":"9e393aed50c55b61a2687274140374ad5eea7a63e632565fd90b5406b0bb6a61"} Jan 31 04:45:10 crc kubenswrapper[4667]: I0131 04:45:10.050908 4667 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:45:10 crc kubenswrapper[4667]: I0131 04:45:10.298223 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_aca13392-5591-4b68-9948-c5e5fe558803/setup-container/0.log" Jan 31 04:45:10 crc kubenswrapper[4667]: I0131 04:45:10.458385 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb_65aa0404-25e7-4a24-8edf-ceae5320b02e/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:45:10 crc kubenswrapper[4667]: I0131 04:45:10.508961 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_aca13392-5591-4b68-9948-c5e5fe558803/rabbitmq/0.log" Jan 31 04:45:10 crc kubenswrapper[4667]: I0131 04:45:10.792367 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-mkct2_c6e23bd4-49c8-4691-ab45-5426e6c3cc6f/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:45:10 crc kubenswrapper[4667]: I0131 04:45:10.892867 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q_500e62ac-7319-4438-ab89-c072499f717c/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:45:11 crc kubenswrapper[4667]: I0131 04:45:11.106685 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8fd84_10997808-cd78-4267-b7a3-7ea36b948a60/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:45:11 crc kubenswrapper[4667]: I0131 04:45:11.326829 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-h2dbq_5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8/ssh-known-hosts-edpm-deployment/0.log" Jan 31 04:45:11 crc kubenswrapper[4667]: I0131 04:45:11.576542 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8bff87d99-j8cd2_30fc5b26-45dd-42f8-9a58-7ba07c5aa56a/proxy-server/0.log" Jan 31 04:45:11 crc kubenswrapper[4667]: I0131 04:45:11.576942 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8bff87d99-j8cd2_30fc5b26-45dd-42f8-9a58-7ba07c5aa56a/proxy-httpd/0.log" Jan 31 04:45:11 crc kubenswrapper[4667]: I0131 04:45:11.814744 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-fpj9r_65cc9566-177a-41b5-b00c-83290fa14641/swift-ring-rebalance/0.log" Jan 31 04:45:11 crc kubenswrapper[4667]: I0131 04:45:11.947297 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/account-auditor/0.log" Jan 31 04:45:12 crc kubenswrapper[4667]: I0131 04:45:12.083562 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z57jn" event={"ID":"f63268bd-d150-4fbc-a77d-bf50a48c1e69","Type":"ContainerStarted","Data":"b42f879034013c6b57ac76143f96922b06d8b7164c44bec6868295daed87cc4e"} Jan 31 04:45:12 crc kubenswrapper[4667]: I0131 04:45:12.099646 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/account-reaper/0.log" Jan 31 04:45:12 crc kubenswrapper[4667]: 
I0131 04:45:12.121994 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/account-replicator/0.log" Jan 31 04:45:12 crc kubenswrapper[4667]: I0131 04:45:12.208995 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/account-server/0.log" Jan 31 04:45:12 crc kubenswrapper[4667]: I0131 04:45:12.290994 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/container-auditor/0.log" Jan 31 04:45:12 crc kubenswrapper[4667]: I0131 04:45:12.410732 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/container-replicator/0.log" Jan 31 04:45:12 crc kubenswrapper[4667]: I0131 04:45:12.423559 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/container-server/0.log" Jan 31 04:45:12 crc kubenswrapper[4667]: I0131 04:45:12.601892 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/container-updater/0.log" Jan 31 04:45:12 crc kubenswrapper[4667]: I0131 04:45:12.673264 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/object-auditor/0.log" Jan 31 04:45:12 crc kubenswrapper[4667]: I0131 04:45:12.678352 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/object-expirer/0.log" Jan 31 04:45:12 crc kubenswrapper[4667]: I0131 04:45:12.852497 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/object-replicator/0.log" Jan 31 04:45:12 crc kubenswrapper[4667]: I0131 04:45:12.911527 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/object-server/0.log" Jan 31 04:45:13 crc kubenswrapper[4667]: I0131 04:45:13.013281 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/rsync/0.log" Jan 31 04:45:13 crc kubenswrapper[4667]: I0131 04:45:13.029594 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/object-updater/0.log" Jan 31 04:45:13 crc kubenswrapper[4667]: I0131 04:45:13.217731 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/swift-recon-cron/0.log" Jan 31 04:45:13 crc kubenswrapper[4667]: I0131 04:45:13.460280 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c_c2249d9c-021c-4dbf-8770-767be19d9404/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:45:13 crc kubenswrapper[4667]: I0131 04:45:13.566503 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_6f4da9b8-1fb2-4d7c-b933-d5749919e9d1/tempest-tests-tempest-tests-runner/0.log" Jan 31 04:45:13 crc kubenswrapper[4667]: I0131 04:45:13.788933 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_dd17b156-9377-4bd0-ab7d-80b57f81c79c/test-operator-logs-container/0.log" Jan 31 04:45:13 crc kubenswrapper[4667]: I0131 
Jan 31 04:45:13 crc kubenswrapper[4667]: I0131 04:45:13.914852 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-972xp_39f585ed-5556-4f88-b5c0-3b6da9807764/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 04:45:18 crc kubenswrapper[4667]: I0131 04:45:18.150197 4667 generic.go:334] "Generic (PLEG): container finished" podID="f63268bd-d150-4fbc-a77d-bf50a48c1e69" containerID="b42f879034013c6b57ac76143f96922b06d8b7164c44bec6868295daed87cc4e" exitCode=0
Jan 31 04:45:18 crc kubenswrapper[4667]: I0131 04:45:18.152173 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z57jn" event={"ID":"f63268bd-d150-4fbc-a77d-bf50a48c1e69","Type":"ContainerDied","Data":"b42f879034013c6b57ac76143f96922b06d8b7164c44bec6868295daed87cc4e"}
Jan 31 04:45:19 crc kubenswrapper[4667]: I0131 04:45:19.165050 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z57jn" event={"ID":"f63268bd-d150-4fbc-a77d-bf50a48c1e69","Type":"ContainerStarted","Data":"1f54789909eef3287e9c397be8d8d6da279becd5559d7b300f1f76b332b5fd33"}
Jan 31 04:45:21 crc kubenswrapper[4667]: I0131 04:45:21.283112 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a"
Jan 31 04:45:22 crc kubenswrapper[4667]: I0131 04:45:22.202057 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerStarted","Data":"e341e89dc7f476e223c40775a5e4d587a91f2ee93cf266bd05345986bb0bfd8b"}
Jan 31 04:45:22 crc kubenswrapper[4667]: I0131 04:45:22.228403 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z57jn" podStartSLOduration=6.569501858 podStartE2EDuration="15.228380131s" podCreationTimestamp="2026-01-31 04:45:07 +0000 UTC" firstStartedPulling="2026-01-31 04:45:10.050619009 +0000 UTC m=+3433.566954308" lastFinishedPulling="2026-01-31 04:45:18.709497282 +0000 UTC m=+3442.225832581" observedRunningTime="2026-01-31 04:45:19.191092813 +0000 UTC m=+3442.707428112" watchObservedRunningTime="2026-01-31 04:45:22.228380131 +0000 UTC m=+3445.744715430"
Jan 31 04:45:22 crc kubenswrapper[4667]: I0131 04:45:22.607740 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_23e21efc-a978-4734-9fe2-f210ab9952f5/memcached/0.log"
Jan 31 04:45:26 crc kubenswrapper[4667]: I0131 04:45:26.387237 4667 scope.go:117] "RemoveContainer" containerID="23c0ec42dbe697eaf0704b00e37db628311cffcda3636e6374cb1523a5355584"
Jan 31 04:45:28 crc kubenswrapper[4667]: I0131 04:45:28.382604 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z57jn"
Jan 31 04:45:28 crc kubenswrapper[4667]: I0131 04:45:28.383461 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z57jn"
Jan 31 04:45:28 crc kubenswrapper[4667]: I0131 04:45:28.433733 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z57jn"
Jan 31 04:45:29 crc kubenswrapper[4667]: I0131 04:45:29.325495 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z57jn"
pods=["openshift-marketplace/redhat-operators-z57jn"] Jan 31 04:45:31 crc kubenswrapper[4667]: I0131 04:45:31.288963 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z57jn" podUID="f63268bd-d150-4fbc-a77d-bf50a48c1e69" containerName="registry-server" containerID="cri-o://1f54789909eef3287e9c397be8d8d6da279becd5559d7b300f1f76b332b5fd33" gracePeriod=2 Jan 31 04:45:31 crc kubenswrapper[4667]: I0131 04:45:31.818317 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z57jn" Jan 31 04:45:31 crc kubenswrapper[4667]: I0131 04:45:31.973315 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63268bd-d150-4fbc-a77d-bf50a48c1e69-catalog-content\") pod \"f63268bd-d150-4fbc-a77d-bf50a48c1e69\" (UID: \"f63268bd-d150-4fbc-a77d-bf50a48c1e69\") " Jan 31 04:45:31 crc kubenswrapper[4667]: I0131 04:45:31.973485 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63268bd-d150-4fbc-a77d-bf50a48c1e69-utilities\") pod \"f63268bd-d150-4fbc-a77d-bf50a48c1e69\" (UID: \"f63268bd-d150-4fbc-a77d-bf50a48c1e69\") " Jan 31 04:45:31 crc kubenswrapper[4667]: I0131 04:45:31.973567 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trng7\" (UniqueName: \"kubernetes.io/projected/f63268bd-d150-4fbc-a77d-bf50a48c1e69-kube-api-access-trng7\") pod \"f63268bd-d150-4fbc-a77d-bf50a48c1e69\" (UID: \"f63268bd-d150-4fbc-a77d-bf50a48c1e69\") " Jan 31 04:45:31 crc kubenswrapper[4667]: I0131 04:45:31.982246 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f63268bd-d150-4fbc-a77d-bf50a48c1e69-kube-api-access-trng7" (OuterVolumeSpecName: "kube-api-access-trng7") pod "f63268bd-d150-4fbc-a77d-bf50a48c1e69" (UID: "f63268bd-d150-4fbc-a77d-bf50a48c1e69"). InnerVolumeSpecName "kube-api-access-trng7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:45:31 crc kubenswrapper[4667]: I0131 04:45:31.987291 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f63268bd-d150-4fbc-a77d-bf50a48c1e69-utilities" (OuterVolumeSpecName: "utilities") pod "f63268bd-d150-4fbc-a77d-bf50a48c1e69" (UID: "f63268bd-d150-4fbc-a77d-bf50a48c1e69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:45:32 crc kubenswrapper[4667]: I0131 04:45:32.076406 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63268bd-d150-4fbc-a77d-bf50a48c1e69-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:32 crc kubenswrapper[4667]: I0131 04:45:32.076442 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trng7\" (UniqueName: \"kubernetes.io/projected/f63268bd-d150-4fbc-a77d-bf50a48c1e69-kube-api-access-trng7\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:32 crc kubenswrapper[4667]: I0131 04:45:32.127069 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f63268bd-d150-4fbc-a77d-bf50a48c1e69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f63268bd-d150-4fbc-a77d-bf50a48c1e69" (UID: "f63268bd-d150-4fbc-a77d-bf50a48c1e69"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:45:32 crc kubenswrapper[4667]: I0131 04:45:32.178356 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63268bd-d150-4fbc-a77d-bf50a48c1e69-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:45:32 crc kubenswrapper[4667]: I0131 04:45:32.320146 4667 generic.go:334] "Generic (PLEG): container finished" podID="f63268bd-d150-4fbc-a77d-bf50a48c1e69" containerID="1f54789909eef3287e9c397be8d8d6da279becd5559d7b300f1f76b332b5fd33" exitCode=0 Jan 31 04:45:32 crc kubenswrapper[4667]: I0131 04:45:32.320193 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z57jn" event={"ID":"f63268bd-d150-4fbc-a77d-bf50a48c1e69","Type":"ContainerDied","Data":"1f54789909eef3287e9c397be8d8d6da279becd5559d7b300f1f76b332b5fd33"} Jan 31 04:45:32 crc kubenswrapper[4667]: I0131 04:45:32.320225 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z57jn" event={"ID":"f63268bd-d150-4fbc-a77d-bf50a48c1e69","Type":"ContainerDied","Data":"93daba788c165eb2def797790bf390e80584151bb9c7f8cff0d73fb8825b2b60"} Jan 31 04:45:32 crc kubenswrapper[4667]: I0131 04:45:32.320243 4667 scope.go:117] "RemoveContainer" containerID="1f54789909eef3287e9c397be8d8d6da279becd5559d7b300f1f76b332b5fd33" Jan 31 04:45:32 crc kubenswrapper[4667]: I0131 04:45:32.320265 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z57jn" Jan 31 04:45:32 crc kubenswrapper[4667]: I0131 04:45:32.347161 4667 scope.go:117] "RemoveContainer" containerID="b42f879034013c6b57ac76143f96922b06d8b7164c44bec6868295daed87cc4e" Jan 31 04:45:32 crc kubenswrapper[4667]: I0131 04:45:32.369533 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z57jn"] Jan 31 04:45:32 crc kubenswrapper[4667]: I0131 04:45:32.393499 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z57jn"] Jan 31 04:45:32 crc kubenswrapper[4667]: I0131 04:45:32.397061 4667 scope.go:117] "RemoveContainer" containerID="9e393aed50c55b61a2687274140374ad5eea7a63e632565fd90b5406b0bb6a61" Jan 31 04:45:32 crc kubenswrapper[4667]: I0131 04:45:32.437243 4667 scope.go:117] "RemoveContainer" containerID="1f54789909eef3287e9c397be8d8d6da279becd5559d7b300f1f76b332b5fd33" Jan 31 04:45:32 crc kubenswrapper[4667]: E0131 04:45:32.437946 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f54789909eef3287e9c397be8d8d6da279becd5559d7b300f1f76b332b5fd33\": container with ID starting with 1f54789909eef3287e9c397be8d8d6da279becd5559d7b300f1f76b332b5fd33 not found: ID does not exist" containerID="1f54789909eef3287e9c397be8d8d6da279becd5559d7b300f1f76b332b5fd33" Jan 31 04:45:32 crc kubenswrapper[4667]: I0131 04:45:32.438002 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f54789909eef3287e9c397be8d8d6da279becd5559d7b300f1f76b332b5fd33"} err="failed to get container status \"1f54789909eef3287e9c397be8d8d6da279becd5559d7b300f1f76b332b5fd33\": rpc error: code = NotFound desc = could not find container \"1f54789909eef3287e9c397be8d8d6da279becd5559d7b300f1f76b332b5fd33\": container with ID starting with 1f54789909eef3287e9c397be8d8d6da279becd5559d7b300f1f76b332b5fd33 not found: ID does not exist" Jan 31 04:45:32 crc 
Jan 31 04:45:32 crc kubenswrapper[4667]: I0131 04:45:32.438040 4667 scope.go:117] "RemoveContainer" containerID="b42f879034013c6b57ac76143f96922b06d8b7164c44bec6868295daed87cc4e"
Jan 31 04:45:32 crc kubenswrapper[4667]: E0131 04:45:32.438862 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b42f879034013c6b57ac76143f96922b06d8b7164c44bec6868295daed87cc4e\": container with ID starting with b42f879034013c6b57ac76143f96922b06d8b7164c44bec6868295daed87cc4e not found: ID does not exist" containerID="b42f879034013c6b57ac76143f96922b06d8b7164c44bec6868295daed87cc4e"
Jan 31 04:45:32 crc kubenswrapper[4667]: I0131 04:45:32.438906 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b42f879034013c6b57ac76143f96922b06d8b7164c44bec6868295daed87cc4e"} err="failed to get container status \"b42f879034013c6b57ac76143f96922b06d8b7164c44bec6868295daed87cc4e\": rpc error: code = NotFound desc = could not find container \"b42f879034013c6b57ac76143f96922b06d8b7164c44bec6868295daed87cc4e\": container with ID starting with b42f879034013c6b57ac76143f96922b06d8b7164c44bec6868295daed87cc4e not found: ID does not exist"
Jan 31 04:45:32 crc kubenswrapper[4667]: I0131 04:45:32.438952 4667 scope.go:117] "RemoveContainer" containerID="9e393aed50c55b61a2687274140374ad5eea7a63e632565fd90b5406b0bb6a61"
Jan 31 04:45:32 crc kubenswrapper[4667]: E0131 04:45:32.439312 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e393aed50c55b61a2687274140374ad5eea7a63e632565fd90b5406b0bb6a61\": container with ID starting with 9e393aed50c55b61a2687274140374ad5eea7a63e632565fd90b5406b0bb6a61 not found: ID does not exist" containerID="9e393aed50c55b61a2687274140374ad5eea7a63e632565fd90b5406b0bb6a61"
Jan 31 04:45:32 crc kubenswrapper[4667]: I0131 04:45:32.439335 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e393aed50c55b61a2687274140374ad5eea7a63e632565fd90b5406b0bb6a61"} err="failed to get container status \"9e393aed50c55b61a2687274140374ad5eea7a63e632565fd90b5406b0bb6a61\": rpc error: code = NotFound desc = could not find container \"9e393aed50c55b61a2687274140374ad5eea7a63e632565fd90b5406b0bb6a61\": container with ID starting with 9e393aed50c55b61a2687274140374ad5eea7a63e632565fd90b5406b0bb6a61 not found: ID does not exist"
Jan 31 04:45:33 crc kubenswrapper[4667]: I0131 04:45:33.293415 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f63268bd-d150-4fbc-a77d-bf50a48c1e69" path="/var/lib/kubelet/pods/f63268bd-d150-4fbc-a77d-bf50a48c1e69/volumes"
Jan 31 04:45:48 crc kubenswrapper[4667]: I0131 04:45:48.048201 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr_770c15b8-5980-4cf9-91c7-11b2ded11b60/util/0.log"
Jan 31 04:45:48 crc kubenswrapper[4667]: I0131 04:45:48.363361 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr_770c15b8-5980-4cf9-91c7-11b2ded11b60/util/0.log"
Jan 31 04:45:48 crc kubenswrapper[4667]: I0131 04:45:48.480871 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr_770c15b8-5980-4cf9-91c7-11b2ded11b60/pull/0.log"
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr_770c15b8-5980-4cf9-91c7-11b2ded11b60/pull/0.log" Jan 31 04:45:48 crc kubenswrapper[4667]: I0131 04:45:48.790224 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr_770c15b8-5980-4cf9-91c7-11b2ded11b60/util/0.log" Jan 31 04:45:48 crc kubenswrapper[4667]: I0131 04:45:48.823245 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr_770c15b8-5980-4cf9-91c7-11b2ded11b60/extract/0.log" Jan 31 04:45:48 crc kubenswrapper[4667]: I0131 04:45:48.838429 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr_770c15b8-5980-4cf9-91c7-11b2ded11b60/pull/0.log" Jan 31 04:45:49 crc kubenswrapper[4667]: I0131 04:45:49.200206 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-pqxkg_508d212d-ccda-471c-94aa-96955a519e5a/manager/0.log" Jan 31 04:45:49 crc kubenswrapper[4667]: I0131 04:45:49.408413 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-2xtdf_5743730b-079b-4b07-a87b-932cd637e387/manager/0.log" Jan 31 04:45:49 crc kubenswrapper[4667]: I0131 04:45:49.646684 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-j629c_5280851f-6404-45ad-adc7-f41479cb7dc3/manager/0.log" Jan 31 04:45:49 crc kubenswrapper[4667]: I0131 04:45:49.883724 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-lzb8l_cfe9238d-7457-43f4-9933-cece048fc3fe/manager/0.log" Jan 31 04:45:50 crc kubenswrapper[4667]: I0131 04:45:50.155567 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-rfpnc_f26454ff-c920-4240-84dd-684272f0c0c8/manager/0.log" Jan 31 04:45:50 crc kubenswrapper[4667]: I0131 04:45:50.588821 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-6cf9z_5108f978-fa68-4add-9f97-5e02aec8c688/manager/0.log" Jan 31 04:45:50 crc kubenswrapper[4667]: I0131 04:45:50.780670 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-zswlt_47cf710a-e856-4094-8ef8-ff115631a236/manager/0.log" Jan 31 04:45:51 crc kubenswrapper[4667]: I0131 04:45:51.003127 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-hxstt_8a4eab04-25a1-4da9-8ee1-0243d4b69073/manager/0.log" Jan 31 04:45:51 crc kubenswrapper[4667]: I0131 04:45:51.221191 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-w6vd6_ed4fdc84-4fc5-4e5a-8959-b5ea977c9b56/manager/0.log" Jan 31 04:45:51 crc kubenswrapper[4667]: I0131 04:45:51.491365 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-wmmkk_f955fd59-24f1-42bb-81a8-c17e32274291/manager/0.log" Jan 31 04:45:51 crc kubenswrapper[4667]: I0131 04:45:51.895013 4667 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-kf59p_1af3e556-130c-4530-89de-dd64852193c8/manager/0.log" Jan 31 04:45:52 crc kubenswrapper[4667]: I0131 04:45:52.008694 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-zq7nc_4dd3097d-038b-459b-be09-25e6a9c28379/manager/0.log" Jan 31 04:45:52 crc kubenswrapper[4667]: I0131 04:45:52.254956 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-2zj6j_645ed22c-c54e-495c-af4d-a63635f01dbc/manager/0.log" Jan 31 04:45:52 crc kubenswrapper[4667]: I0131 04:45:52.566455 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4d697dw_ad7389f5-4d9e-4a91-89b8-8f65e425fe83/manager/0.log" Jan 31 04:45:53 crc kubenswrapper[4667]: I0131 04:45:53.044743 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-77f687fc99-5bq4z_2b9c9fa2-4838-4c78-bcab-9bc723279049/operator/0.log" Jan 31 04:45:53 crc kubenswrapper[4667]: I0131 04:45:53.355296 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-zhdt9_28215a5c-c908-41e3-b138-1b26eaab9121/registry-server/0.log" Jan 31 04:45:53 crc kubenswrapper[4667]: I0131 04:45:53.681279 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-vpt7r_4955e603-5ae1-4c59-8f06-7e4c3f1cae70/manager/0.log" Jan 31 04:45:53 crc kubenswrapper[4667]: I0131 04:45:53.896493 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-lgk8x_aa7cd74d-218f-47a1-80f6-db8e475b1ba0/manager/0.log" Jan 31 04:45:54 crc kubenswrapper[4667]: I0131 04:45:54.004129 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-964b5_1f3ad0ee-dce4-4ed0-90f7-e2c195b6d099/operator/0.log" Jan 31 04:45:54 crc kubenswrapper[4667]: I0131 04:45:54.196086 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-54fc54694b-t88kx_75fa830b-0948-4104-874f-332cb2ea9de2/manager/0.log" Jan 31 04:45:54 crc kubenswrapper[4667]: I0131 04:45:54.334582 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-x46bg_fc224c93-299f-4f99-b16d-64ab47cb66a8/manager/0.log" Jan 31 04:45:54 crc kubenswrapper[4667]: I0131 04:45:54.635587 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-b4btm_7c71998a-5e4c-461c-96f9-3ff67b4619cd/manager/0.log" Jan 31 04:45:54 crc kubenswrapper[4667]: I0131 04:45:54.688718 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-fcd7f5fc5-pfnrd_5af1cf00-3340-481a-9312-cdd15cddbf5d/manager/0.log" Jan 31 04:45:54 crc kubenswrapper[4667]: I0131 04:45:54.741638 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-gzj6r_340a909d-7419-4721-be11-2c37a3a87022/manager/0.log" Jan 31 04:45:54 crc kubenswrapper[4667]: I0131 04:45:54.971952 4667 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-fxzcm_675da051-c9cc-4817-9092-478b3d90d1bf/manager/0.log" Jan 31 04:46:00 crc kubenswrapper[4667]: I0131 04:46:00.255482 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nlzt2"] Jan 31 04:46:00 crc kubenswrapper[4667]: E0131 04:46:00.258348 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63268bd-d150-4fbc-a77d-bf50a48c1e69" containerName="extract-utilities" Jan 31 04:46:00 crc kubenswrapper[4667]: I0131 04:46:00.258483 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63268bd-d150-4fbc-a77d-bf50a48c1e69" containerName="extract-utilities" Jan 31 04:46:00 crc kubenswrapper[4667]: E0131 04:46:00.258582 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63268bd-d150-4fbc-a77d-bf50a48c1e69" containerName="registry-server" Jan 31 04:46:00 crc kubenswrapper[4667]: I0131 04:46:00.258670 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63268bd-d150-4fbc-a77d-bf50a48c1e69" containerName="registry-server" Jan 31 04:46:00 crc kubenswrapper[4667]: E0131 04:46:00.258780 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63268bd-d150-4fbc-a77d-bf50a48c1e69" containerName="extract-content" Jan 31 04:46:00 crc kubenswrapper[4667]: I0131 04:46:00.258875 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63268bd-d150-4fbc-a77d-bf50a48c1e69" containerName="extract-content" Jan 31 04:46:00 crc kubenswrapper[4667]: I0131 04:46:00.259363 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="f63268bd-d150-4fbc-a77d-bf50a48c1e69" containerName="registry-server" Jan 31 04:46:00 crc kubenswrapper[4667]: I0131 04:46:00.264554 4667 util.go:30] "No sandbox for pod can be found. 
Jan 31 04:46:00 crc kubenswrapper[4667]: I0131 04:46:00.264554 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nlzt2"
Jan 31 04:46:00 crc kubenswrapper[4667]: I0131 04:46:00.287144 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nlzt2"]
Jan 31 04:46:00 crc kubenswrapper[4667]: I0131 04:46:00.408727 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47f0e6d2-7560-4836-aa9a-e64249bec017-utilities\") pod \"certified-operators-nlzt2\" (UID: \"47f0e6d2-7560-4836-aa9a-e64249bec017\") " pod="openshift-marketplace/certified-operators-nlzt2"
Jan 31 04:46:00 crc kubenswrapper[4667]: I0131 04:46:00.408960 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47f0e6d2-7560-4836-aa9a-e64249bec017-catalog-content\") pod \"certified-operators-nlzt2\" (UID: \"47f0e6d2-7560-4836-aa9a-e64249bec017\") " pod="openshift-marketplace/certified-operators-nlzt2"
Jan 31 04:46:00 crc kubenswrapper[4667]: I0131 04:46:00.408986 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n92v4\" (UniqueName: \"kubernetes.io/projected/47f0e6d2-7560-4836-aa9a-e64249bec017-kube-api-access-n92v4\") pod \"certified-operators-nlzt2\" (UID: \"47f0e6d2-7560-4836-aa9a-e64249bec017\") " pod="openshift-marketplace/certified-operators-nlzt2"
Jan 31 04:46:00 crc kubenswrapper[4667]: I0131 04:46:00.510923 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47f0e6d2-7560-4836-aa9a-e64249bec017-utilities\") pod \"certified-operators-nlzt2\" (UID: \"47f0e6d2-7560-4836-aa9a-e64249bec017\") " pod="openshift-marketplace/certified-operators-nlzt2"
Jan 31 04:46:00 crc kubenswrapper[4667]: I0131 04:46:00.511135 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47f0e6d2-7560-4836-aa9a-e64249bec017-catalog-content\") pod \"certified-operators-nlzt2\" (UID: \"47f0e6d2-7560-4836-aa9a-e64249bec017\") " pod="openshift-marketplace/certified-operators-nlzt2"
Jan 31 04:46:00 crc kubenswrapper[4667]: I0131 04:46:00.511161 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n92v4\" (UniqueName: \"kubernetes.io/projected/47f0e6d2-7560-4836-aa9a-e64249bec017-kube-api-access-n92v4\") pod \"certified-operators-nlzt2\" (UID: \"47f0e6d2-7560-4836-aa9a-e64249bec017\") " pod="openshift-marketplace/certified-operators-nlzt2"
Jan 31 04:46:00 crc kubenswrapper[4667]: I0131 04:46:00.511528 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47f0e6d2-7560-4836-aa9a-e64249bec017-utilities\") pod \"certified-operators-nlzt2\" (UID: \"47f0e6d2-7560-4836-aa9a-e64249bec017\") " pod="openshift-marketplace/certified-operators-nlzt2"
Jan 31 04:46:00 crc kubenswrapper[4667]: I0131 04:46:00.512225 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47f0e6d2-7560-4836-aa9a-e64249bec017-catalog-content\") pod \"certified-operators-nlzt2\" (UID: \"47f0e6d2-7560-4836-aa9a-e64249bec017\") " pod="openshift-marketplace/certified-operators-nlzt2"
"MountVolume.SetUp succeeded for volume \"kube-api-access-n92v4\" (UniqueName: \"kubernetes.io/projected/47f0e6d2-7560-4836-aa9a-e64249bec017-kube-api-access-n92v4\") pod \"certified-operators-nlzt2\" (UID: \"47f0e6d2-7560-4836-aa9a-e64249bec017\") " pod="openshift-marketplace/certified-operators-nlzt2" Jan 31 04:46:00 crc kubenswrapper[4667]: I0131 04:46:00.590478 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nlzt2" Jan 31 04:46:01 crc kubenswrapper[4667]: I0131 04:46:01.205034 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nlzt2"] Jan 31 04:46:01 crc kubenswrapper[4667]: I0131 04:46:01.679685 4667 generic.go:334] "Generic (PLEG): container finished" podID="47f0e6d2-7560-4836-aa9a-e64249bec017" containerID="fee5d1f7395f67e086e56a08f320a7d62139125b8bd5b46efe144e08466cf264" exitCode=0 Jan 31 04:46:01 crc kubenswrapper[4667]: I0131 04:46:01.679891 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nlzt2" event={"ID":"47f0e6d2-7560-4836-aa9a-e64249bec017","Type":"ContainerDied","Data":"fee5d1f7395f67e086e56a08f320a7d62139125b8bd5b46efe144e08466cf264"} Jan 31 04:46:01 crc kubenswrapper[4667]: I0131 04:46:01.680065 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nlzt2" event={"ID":"47f0e6d2-7560-4836-aa9a-e64249bec017","Type":"ContainerStarted","Data":"18b8b1c607f625cca0c3713f388e6690917b1663eb2eaad9e5c3e1060e0db60b"} Jan 31 04:46:02 crc kubenswrapper[4667]: I0131 04:46:02.697891 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nlzt2" event={"ID":"47f0e6d2-7560-4836-aa9a-e64249bec017","Type":"ContainerStarted","Data":"2dd0ae22902592f2ea80e8746aed6770545f6a1769541f00d9106052443048bc"} Jan 31 04:46:04 crc kubenswrapper[4667]: I0131 04:46:04.721310 4667 generic.go:334] "Generic (PLEG): container finished" podID="47f0e6d2-7560-4836-aa9a-e64249bec017" containerID="2dd0ae22902592f2ea80e8746aed6770545f6a1769541f00d9106052443048bc" exitCode=0 Jan 31 04:46:04 crc kubenswrapper[4667]: I0131 04:46:04.721532 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nlzt2" event={"ID":"47f0e6d2-7560-4836-aa9a-e64249bec017","Type":"ContainerDied","Data":"2dd0ae22902592f2ea80e8746aed6770545f6a1769541f00d9106052443048bc"} Jan 31 04:46:05 crc kubenswrapper[4667]: I0131 04:46:05.736090 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nlzt2" event={"ID":"47f0e6d2-7560-4836-aa9a-e64249bec017","Type":"ContainerStarted","Data":"f76e13726f1fa1e9e99808f20b73f7ee21363133d20ac963eff49e8b75b64c41"} Jan 31 04:46:05 crc kubenswrapper[4667]: I0131 04:46:05.769354 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nlzt2" podStartSLOduration=2.2859033 podStartE2EDuration="5.769333548s" podCreationTimestamp="2026-01-31 04:46:00 +0000 UTC" firstStartedPulling="2026-01-31 04:46:01.682303336 +0000 UTC m=+3485.198638635" lastFinishedPulling="2026-01-31 04:46:05.165733584 +0000 UTC m=+3488.682068883" observedRunningTime="2026-01-31 04:46:05.759705846 +0000 UTC m=+3489.276041145" watchObservedRunningTime="2026-01-31 04:46:05.769333548 +0000 UTC m=+3489.285668847" Jan 31 04:46:10 crc kubenswrapper[4667]: I0131 04:46:10.591196 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-nlzt2" Jan 31 04:46:10 crc kubenswrapper[4667]: I0131 04:46:10.592934 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nlzt2" Jan 31 04:46:10 crc kubenswrapper[4667]: I0131 04:46:10.646087 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nlzt2" Jan 31 04:46:10 crc kubenswrapper[4667]: I0131 04:46:10.838806 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nlzt2" Jan 31 04:46:10 crc kubenswrapper[4667]: I0131 04:46:10.898083 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nlzt2"] Jan 31 04:46:12 crc kubenswrapper[4667]: I0131 04:46:12.806612 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nlzt2" podUID="47f0e6d2-7560-4836-aa9a-e64249bec017" containerName="registry-server" containerID="cri-o://f76e13726f1fa1e9e99808f20b73f7ee21363133d20ac963eff49e8b75b64c41" gracePeriod=2 Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.336004 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nlzt2" Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.443965 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47f0e6d2-7560-4836-aa9a-e64249bec017-utilities\") pod \"47f0e6d2-7560-4836-aa9a-e64249bec017\" (UID: \"47f0e6d2-7560-4836-aa9a-e64249bec017\") " Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.444149 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n92v4\" (UniqueName: \"kubernetes.io/projected/47f0e6d2-7560-4836-aa9a-e64249bec017-kube-api-access-n92v4\") pod \"47f0e6d2-7560-4836-aa9a-e64249bec017\" (UID: \"47f0e6d2-7560-4836-aa9a-e64249bec017\") " Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.444213 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47f0e6d2-7560-4836-aa9a-e64249bec017-catalog-content\") pod \"47f0e6d2-7560-4836-aa9a-e64249bec017\" (UID: \"47f0e6d2-7560-4836-aa9a-e64249bec017\") " Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.445023 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47f0e6d2-7560-4836-aa9a-e64249bec017-utilities" (OuterVolumeSpecName: "utilities") pod "47f0e6d2-7560-4836-aa9a-e64249bec017" (UID: "47f0e6d2-7560-4836-aa9a-e64249bec017"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.452000 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47f0e6d2-7560-4836-aa9a-e64249bec017-kube-api-access-n92v4" (OuterVolumeSpecName: "kube-api-access-n92v4") pod "47f0e6d2-7560-4836-aa9a-e64249bec017" (UID: "47f0e6d2-7560-4836-aa9a-e64249bec017"). InnerVolumeSpecName "kube-api-access-n92v4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.507535 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47f0e6d2-7560-4836-aa9a-e64249bec017-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47f0e6d2-7560-4836-aa9a-e64249bec017" (UID: "47f0e6d2-7560-4836-aa9a-e64249bec017"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.547490 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47f0e6d2-7560-4836-aa9a-e64249bec017-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.547530 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n92v4\" (UniqueName: \"kubernetes.io/projected/47f0e6d2-7560-4836-aa9a-e64249bec017-kube-api-access-n92v4\") on node \"crc\" DevicePath \"\"" Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.547541 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47f0e6d2-7560-4836-aa9a-e64249bec017-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.822825 4667 generic.go:334] "Generic (PLEG): container finished" podID="47f0e6d2-7560-4836-aa9a-e64249bec017" containerID="f76e13726f1fa1e9e99808f20b73f7ee21363133d20ac963eff49e8b75b64c41" exitCode=0 Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.823993 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nlzt2" event={"ID":"47f0e6d2-7560-4836-aa9a-e64249bec017","Type":"ContainerDied","Data":"f76e13726f1fa1e9e99808f20b73f7ee21363133d20ac963eff49e8b75b64c41"} Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.824077 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nlzt2" event={"ID":"47f0e6d2-7560-4836-aa9a-e64249bec017","Type":"ContainerDied","Data":"18b8b1c607f625cca0c3713f388e6690917b1663eb2eaad9e5c3e1060e0db60b"} Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.824114 4667 scope.go:117] "RemoveContainer" containerID="f76e13726f1fa1e9e99808f20b73f7ee21363133d20ac963eff49e8b75b64c41" Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.824461 4667 util.go:48] "No ready sandbox for pod can be found. 
Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.824461 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nlzt2"
Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.858923 4667 scope.go:117] "RemoveContainer" containerID="2dd0ae22902592f2ea80e8746aed6770545f6a1769541f00d9106052443048bc"
Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.880119 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nlzt2"]
Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.892235 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nlzt2"]
Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.896922 4667 scope.go:117] "RemoveContainer" containerID="fee5d1f7395f67e086e56a08f320a7d62139125b8bd5b46efe144e08466cf264"
Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.942061 4667 scope.go:117] "RemoveContainer" containerID="f76e13726f1fa1e9e99808f20b73f7ee21363133d20ac963eff49e8b75b64c41"
Jan 31 04:46:13 crc kubenswrapper[4667]: E0131 04:46:13.948500 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f76e13726f1fa1e9e99808f20b73f7ee21363133d20ac963eff49e8b75b64c41\": container with ID starting with f76e13726f1fa1e9e99808f20b73f7ee21363133d20ac963eff49e8b75b64c41 not found: ID does not exist" containerID="f76e13726f1fa1e9e99808f20b73f7ee21363133d20ac963eff49e8b75b64c41"
Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.948538 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f76e13726f1fa1e9e99808f20b73f7ee21363133d20ac963eff49e8b75b64c41"} err="failed to get container status \"f76e13726f1fa1e9e99808f20b73f7ee21363133d20ac963eff49e8b75b64c41\": rpc error: code = NotFound desc = could not find container \"f76e13726f1fa1e9e99808f20b73f7ee21363133d20ac963eff49e8b75b64c41\": container with ID starting with f76e13726f1fa1e9e99808f20b73f7ee21363133d20ac963eff49e8b75b64c41 not found: ID does not exist"
Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.948564 4667 scope.go:117] "RemoveContainer" containerID="2dd0ae22902592f2ea80e8746aed6770545f6a1769541f00d9106052443048bc"
Jan 31 04:46:13 crc kubenswrapper[4667]: E0131 04:46:13.949065 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dd0ae22902592f2ea80e8746aed6770545f6a1769541f00d9106052443048bc\": container with ID starting with 2dd0ae22902592f2ea80e8746aed6770545f6a1769541f00d9106052443048bc not found: ID does not exist" containerID="2dd0ae22902592f2ea80e8746aed6770545f6a1769541f00d9106052443048bc"
Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.949173 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dd0ae22902592f2ea80e8746aed6770545f6a1769541f00d9106052443048bc"} err="failed to get container status \"2dd0ae22902592f2ea80e8746aed6770545f6a1769541f00d9106052443048bc\": rpc error: code = NotFound desc = could not find container \"2dd0ae22902592f2ea80e8746aed6770545f6a1769541f00d9106052443048bc\": container with ID starting with 2dd0ae22902592f2ea80e8746aed6770545f6a1769541f00d9106052443048bc not found: ID does not exist"
Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.949212 4667 scope.go:117] "RemoveContainer" containerID="fee5d1f7395f67e086e56a08f320a7d62139125b8bd5b46efe144e08466cf264"
failed" err="rpc error: code = NotFound desc = could not find container \"fee5d1f7395f67e086e56a08f320a7d62139125b8bd5b46efe144e08466cf264\": container with ID starting with fee5d1f7395f67e086e56a08f320a7d62139125b8bd5b46efe144e08466cf264 not found: ID does not exist" containerID="fee5d1f7395f67e086e56a08f320a7d62139125b8bd5b46efe144e08466cf264" Jan 31 04:46:13 crc kubenswrapper[4667]: I0131 04:46:13.949628 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fee5d1f7395f67e086e56a08f320a7d62139125b8bd5b46efe144e08466cf264"} err="failed to get container status \"fee5d1f7395f67e086e56a08f320a7d62139125b8bd5b46efe144e08466cf264\": rpc error: code = NotFound desc = could not find container \"fee5d1f7395f67e086e56a08f320a7d62139125b8bd5b46efe144e08466cf264\": container with ID starting with fee5d1f7395f67e086e56a08f320a7d62139125b8bd5b46efe144e08466cf264 not found: ID does not exist" Jan 31 04:46:15 crc kubenswrapper[4667]: I0131 04:46:15.296535 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47f0e6d2-7560-4836-aa9a-e64249bec017" path="/var/lib/kubelet/pods/47f0e6d2-7560-4836-aa9a-e64249bec017/volumes" Jan 31 04:46:21 crc kubenswrapper[4667]: I0131 04:46:21.265556 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-stnvq_dbbace8c-06bb-4b50-a132-a681482dc9e5/control-plane-machine-set-operator/0.log" Jan 31 04:46:21 crc kubenswrapper[4667]: I0131 04:46:21.584226 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zpjcj_83d090b3-311a-4b89-aa7d-de1ca0b237d6/kube-rbac-proxy/0.log" Jan 31 04:46:21 crc kubenswrapper[4667]: I0131 04:46:21.644917 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zpjcj_83d090b3-311a-4b89-aa7d-de1ca0b237d6/machine-api-operator/0.log" Jan 31 04:46:39 crc kubenswrapper[4667]: I0131 04:46:39.809584 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-fhjxn_79f310bb-9fe3-4e37-9c80-b5c218823271/cert-manager-controller/0.log" Jan 31 04:46:40 crc kubenswrapper[4667]: I0131 04:46:40.166400 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-2vkbp_3a2e920c-f3b6-4c7d-aeae-f8d88ce0a3b1/cert-manager-webhook/0.log" Jan 31 04:46:40 crc kubenswrapper[4667]: I0131 04:46:40.203648 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-z8c84_06350efe-2c60-4ce9-a58d-034636cc57db/cert-manager-cainjector/0.log" Jan 31 04:46:58 crc kubenswrapper[4667]: I0131 04:46:58.221942 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-7qjzd_20571b84-83e2-494c-b690-9d7005ef51eb/nmstate-console-plugin/0.log" Jan 31 04:46:58 crc kubenswrapper[4667]: I0131 04:46:58.433409 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lcflt_5713803d-a7eb-4197-bed0-8cfd7112add6/nmstate-handler/0.log" Jan 31 04:46:58 crc kubenswrapper[4667]: I0131 04:46:58.608557 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-4fm4q_848d059a-1bd2-4bec-ae9b-36352c162923/kube-rbac-proxy/0.log" Jan 31 04:46:58 crc kubenswrapper[4667]: I0131 04:46:58.796018 4667 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-4fm4q_848d059a-1bd2-4bec-ae9b-36352c162923/nmstate-metrics/0.log" Jan 31 04:46:58 crc kubenswrapper[4667]: I0131 04:46:58.924409 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-cjp5c_d4bb0958-e09f-488c-9d40-747ddd8ed31a/nmstate-operator/0.log" Jan 31 04:46:59 crc kubenswrapper[4667]: I0131 04:46:59.140356 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-7nqzs_454649dc-76dc-45ea-8395-90c8e06d3e2f/nmstate-webhook/0.log" Jan 31 04:47:38 crc kubenswrapper[4667]: I0131 04:47:38.255189 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-np9hr_62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77/kube-rbac-proxy/0.log" Jan 31 04:47:38 crc kubenswrapper[4667]: I0131 04:47:38.313223 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-np9hr_62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77/controller/0.log" Jan 31 04:47:38 crc kubenswrapper[4667]: I0131 04:47:38.608276 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-frr-files/0.log" Jan 31 04:47:38 crc kubenswrapper[4667]: I0131 04:47:38.903352 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-metrics/0.log" Jan 31 04:47:38 crc kubenswrapper[4667]: I0131 04:47:38.942440 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-reloader/0.log" Jan 31 04:47:38 crc kubenswrapper[4667]: I0131 04:47:38.945508 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-frr-files/0.log" Jan 31 04:47:38 crc kubenswrapper[4667]: I0131 04:47:38.978476 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-reloader/0.log" Jan 31 04:47:39 crc kubenswrapper[4667]: I0131 04:47:39.305433 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-reloader/0.log" Jan 31 04:47:39 crc kubenswrapper[4667]: I0131 04:47:39.383340 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-metrics/0.log" Jan 31 04:47:39 crc kubenswrapper[4667]: I0131 04:47:39.391608 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-frr-files/0.log" Jan 31 04:47:39 crc kubenswrapper[4667]: I0131 04:47:39.432472 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-metrics/0.log" Jan 31 04:47:39 crc kubenswrapper[4667]: I0131 04:47:39.704230 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-frr-files/0.log" Jan 31 04:47:39 crc kubenswrapper[4667]: I0131 04:47:39.723113 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/controller/0.log" Jan 31 04:47:39 crc kubenswrapper[4667]: I0131 04:47:39.731147 4667 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-reloader/0.log" Jan 31 04:47:39 crc kubenswrapper[4667]: I0131 04:47:39.747718 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-metrics/0.log" Jan 31 04:47:39 crc kubenswrapper[4667]: I0131 04:47:39.995216 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/kube-rbac-proxy/0.log" Jan 31 04:47:40 crc kubenswrapper[4667]: I0131 04:47:40.059755 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/kube-rbac-proxy-frr/0.log" Jan 31 04:47:40 crc kubenswrapper[4667]: I0131 04:47:40.091868 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/frr-metrics/0.log" Jan 31 04:47:40 crc kubenswrapper[4667]: I0131 04:47:40.264087 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/reloader/0.log" Jan 31 04:47:40 crc kubenswrapper[4667]: I0131 04:47:40.391221 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-7tjsc_bf8fd966-64cf-493a-b75c-2588e084afb8/frr-k8s-webhook-server/0.log" Jan 31 04:47:40 crc kubenswrapper[4667]: I0131 04:47:40.706017 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6fbb7fc476-m2zqb_22c596d0-b347-4dd0-ab61-7560ec9f5636/manager/0.log" Jan 31 04:47:41 crc kubenswrapper[4667]: I0131 04:47:41.046471 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-86d6b8c8bf-jddpp_a9fc0a54-a93e-4113-8b7c-25015ed1cb60/webhook-server/0.log" Jan 31 04:47:41 crc kubenswrapper[4667]: I0131 04:47:41.138591 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tqnx9_a01baef7-dca0-4217-a1de-cbfcf6348664/kube-rbac-proxy/0.log" Jan 31 04:47:41 crc kubenswrapper[4667]: I0131 04:47:41.172251 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/frr/0.log" Jan 31 04:47:41 crc kubenswrapper[4667]: I0131 04:47:41.719326 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tqnx9_a01baef7-dca0-4217-a1de-cbfcf6348664/speaker/0.log" Jan 31 04:47:45 crc kubenswrapper[4667]: I0131 04:47:45.704380 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:47:45 crc kubenswrapper[4667]: I0131 04:47:45.705315 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:47:57 crc kubenswrapper[4667]: I0131 04:47:57.420760 4667 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw_65537ed9-39d5-40b0-82c9-a4b3d9dc6551/util/0.log" Jan 31 04:47:57 crc kubenswrapper[4667]: I0131 04:47:57.706492 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw_65537ed9-39d5-40b0-82c9-a4b3d9dc6551/pull/0.log" Jan 31 04:47:57 crc kubenswrapper[4667]: I0131 04:47:57.766355 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw_65537ed9-39d5-40b0-82c9-a4b3d9dc6551/util/0.log" Jan 31 04:47:57 crc kubenswrapper[4667]: I0131 04:47:57.782435 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw_65537ed9-39d5-40b0-82c9-a4b3d9dc6551/pull/0.log" Jan 31 04:47:57 crc kubenswrapper[4667]: I0131 04:47:57.972687 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw_65537ed9-39d5-40b0-82c9-a4b3d9dc6551/util/0.log" Jan 31 04:47:57 crc kubenswrapper[4667]: I0131 04:47:57.980156 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw_65537ed9-39d5-40b0-82c9-a4b3d9dc6551/pull/0.log" Jan 31 04:47:58 crc kubenswrapper[4667]: I0131 04:47:58.016320 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw_65537ed9-39d5-40b0-82c9-a4b3d9dc6551/extract/0.log" Jan 31 04:47:58 crc kubenswrapper[4667]: I0131 04:47:58.366193 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz_12187e5c-4ff4-4ab3-baea-3501646a5c68/util/0.log" Jan 31 04:47:58 crc kubenswrapper[4667]: I0131 04:47:58.415775 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz_12187e5c-4ff4-4ab3-baea-3501646a5c68/pull/0.log" Jan 31 04:47:58 crc kubenswrapper[4667]: I0131 04:47:58.475527 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz_12187e5c-4ff4-4ab3-baea-3501646a5c68/pull/0.log" Jan 31 04:47:58 crc kubenswrapper[4667]: I0131 04:47:58.512950 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz_12187e5c-4ff4-4ab3-baea-3501646a5c68/util/0.log" Jan 31 04:47:58 crc kubenswrapper[4667]: I0131 04:47:58.634752 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz_12187e5c-4ff4-4ab3-baea-3501646a5c68/util/0.log" Jan 31 04:47:58 crc kubenswrapper[4667]: I0131 04:47:58.676481 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz_12187e5c-4ff4-4ab3-baea-3501646a5c68/pull/0.log" Jan 31 04:47:58 crc kubenswrapper[4667]: I0131 04:47:58.735288 4667 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz_12187e5c-4ff4-4ab3-baea-3501646a5c68/extract/0.log" Jan 31 04:47:58 crc kubenswrapper[4667]: I0131 04:47:58.867597 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8kcsw_6721fd64-d815-4fa7-8332-76eebcfad816/extract-utilities/0.log" Jan 31 04:47:59 crc kubenswrapper[4667]: I0131 04:47:59.170887 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8kcsw_6721fd64-d815-4fa7-8332-76eebcfad816/extract-content/0.log" Jan 31 04:47:59 crc kubenswrapper[4667]: I0131 04:47:59.210824 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8kcsw_6721fd64-d815-4fa7-8332-76eebcfad816/extract-content/0.log" Jan 31 04:47:59 crc kubenswrapper[4667]: I0131 04:47:59.214694 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8kcsw_6721fd64-d815-4fa7-8332-76eebcfad816/extract-utilities/0.log" Jan 31 04:47:59 crc kubenswrapper[4667]: I0131 04:47:59.376370 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8kcsw_6721fd64-d815-4fa7-8332-76eebcfad816/extract-utilities/0.log" Jan 31 04:47:59 crc kubenswrapper[4667]: I0131 04:47:59.390598 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8kcsw_6721fd64-d815-4fa7-8332-76eebcfad816/extract-content/0.log" Jan 31 04:47:59 crc kubenswrapper[4667]: I0131 04:47:59.661944 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qnlkn_b321312d-8d4b-4547-98c2-e3226cfb5dc5/extract-utilities/0.log" Jan 31 04:47:59 crc kubenswrapper[4667]: I0131 04:47:59.893872 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8kcsw_6721fd64-d815-4fa7-8332-76eebcfad816/registry-server/0.log" Jan 31 04:47:59 crc kubenswrapper[4667]: I0131 04:47:59.949466 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qnlkn_b321312d-8d4b-4547-98c2-e3226cfb5dc5/extract-utilities/0.log" Jan 31 04:48:00 crc kubenswrapper[4667]: I0131 04:48:00.007551 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qnlkn_b321312d-8d4b-4547-98c2-e3226cfb5dc5/extract-content/0.log" Jan 31 04:48:00 crc kubenswrapper[4667]: I0131 04:48:00.008727 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qnlkn_b321312d-8d4b-4547-98c2-e3226cfb5dc5/extract-content/0.log" Jan 31 04:48:00 crc kubenswrapper[4667]: I0131 04:48:00.345591 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qnlkn_b321312d-8d4b-4547-98c2-e3226cfb5dc5/extract-utilities/0.log" Jan 31 04:48:00 crc kubenswrapper[4667]: I0131 04:48:00.522362 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qnlkn_b321312d-8d4b-4547-98c2-e3226cfb5dc5/extract-content/0.log" Jan 31 04:48:00 crc kubenswrapper[4667]: I0131 04:48:00.745674 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cq68x_eca662bd-5da4-45dd-9d55-714a74234cec/marketplace-operator/0.log" Jan 31 04:48:00 crc kubenswrapper[4667]: I0131 04:48:00.771119 4667 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qnlkn_b321312d-8d4b-4547-98c2-e3226cfb5dc5/registry-server/0.log" Jan 31 04:48:00 crc kubenswrapper[4667]: I0131 04:48:00.842756 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4nf68_88ba3bd3-095c-4d75-b2c7-fa72d74704ef/extract-utilities/0.log" Jan 31 04:48:01 crc kubenswrapper[4667]: I0131 04:48:01.041031 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4nf68_88ba3bd3-095c-4d75-b2c7-fa72d74704ef/extract-content/0.log" Jan 31 04:48:01 crc kubenswrapper[4667]: I0131 04:48:01.078121 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4nf68_88ba3bd3-095c-4d75-b2c7-fa72d74704ef/extract-utilities/0.log" Jan 31 04:48:01 crc kubenswrapper[4667]: I0131 04:48:01.140127 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4nf68_88ba3bd3-095c-4d75-b2c7-fa72d74704ef/extract-content/0.log" Jan 31 04:48:01 crc kubenswrapper[4667]: I0131 04:48:01.282600 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4nf68_88ba3bd3-095c-4d75-b2c7-fa72d74704ef/extract-content/0.log" Jan 31 04:48:01 crc kubenswrapper[4667]: I0131 04:48:01.338259 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4nf68_88ba3bd3-095c-4d75-b2c7-fa72d74704ef/extract-utilities/0.log" Jan 31 04:48:01 crc kubenswrapper[4667]: I0131 04:48:01.432741 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4nf68_88ba3bd3-095c-4d75-b2c7-fa72d74704ef/registry-server/0.log" Jan 31 04:48:01 crc kubenswrapper[4667]: I0131 04:48:01.571923 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b72vl_7c3d3bea-fee8-4619-8daf-bef3da273e55/extract-utilities/0.log" Jan 31 04:48:01 crc kubenswrapper[4667]: I0131 04:48:01.812124 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b72vl_7c3d3bea-fee8-4619-8daf-bef3da273e55/extract-utilities/0.log" Jan 31 04:48:01 crc kubenswrapper[4667]: I0131 04:48:01.856112 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b72vl_7c3d3bea-fee8-4619-8daf-bef3da273e55/extract-content/0.log" Jan 31 04:48:01 crc kubenswrapper[4667]: I0131 04:48:01.856180 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b72vl_7c3d3bea-fee8-4619-8daf-bef3da273e55/extract-content/0.log" Jan 31 04:48:02 crc kubenswrapper[4667]: I0131 04:48:02.443188 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b72vl_7c3d3bea-fee8-4619-8daf-bef3da273e55/extract-content/0.log" Jan 31 04:48:02 crc kubenswrapper[4667]: I0131 04:48:02.567440 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b72vl_7c3d3bea-fee8-4619-8daf-bef3da273e55/extract-utilities/0.log" Jan 31 04:48:02 crc kubenswrapper[4667]: I0131 04:48:02.865009 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b72vl_7c3d3bea-fee8-4619-8daf-bef3da273e55/registry-server/0.log" Jan 31 04:48:15 crc kubenswrapper[4667]: I0131 04:48:15.705026 4667 patch_prober.go:28] interesting 
pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:48:15 crc kubenswrapper[4667]: I0131 04:48:15.706060 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:48:22 crc kubenswrapper[4667]: E0131 04:48:22.538551 4667 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.111:36060->38.102.83.111:37867: write tcp 38.102.83.111:36060->38.102.83.111:37867: write: broken pipe Jan 31 04:48:27 crc kubenswrapper[4667]: E0131 04:48:27.039383 4667 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.111:36146->38.102.83.111:37867: write tcp 38.102.83.111:36146->38.102.83.111:37867: write: broken pipe Jan 31 04:48:45 crc kubenswrapper[4667]: I0131 04:48:45.704929 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:48:45 crc kubenswrapper[4667]: I0131 04:48:45.705811 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:48:45 crc kubenswrapper[4667]: I0131 04:48:45.705897 4667 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 04:48:45 crc kubenswrapper[4667]: I0131 04:48:45.707048 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e341e89dc7f476e223c40775a5e4d587a91f2ee93cf266bd05345986bb0bfd8b"} pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:48:45 crc kubenswrapper[4667]: I0131 04:48:45.707131 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" containerID="cri-o://e341e89dc7f476e223c40775a5e4d587a91f2ee93cf266bd05345986bb0bfd8b" gracePeriod=600 Jan 31 04:48:46 crc kubenswrapper[4667]: I0131 04:48:46.465811 4667 generic.go:334] "Generic (PLEG): container finished" podID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerID="e341e89dc7f476e223c40775a5e4d587a91f2ee93cf266bd05345986bb0bfd8b" exitCode=0 Jan 31 04:48:46 crc kubenswrapper[4667]: I0131 04:48:46.465979 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerDied","Data":"e341e89dc7f476e223c40775a5e4d587a91f2ee93cf266bd05345986bb0bfd8b"} Jan 31 04:48:46 
crc kubenswrapper[4667]: I0131 04:48:46.466760 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerStarted","Data":"3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353"} Jan 31 04:48:46 crc kubenswrapper[4667]: I0131 04:48:46.466794 4667 scope.go:117] "RemoveContainer" containerID="4a024f5baac5fed9fbfd2275beffa02329fe0d56f2f10277fb0cd58a753b185a" Jan 31 04:50:08 crc kubenswrapper[4667]: I0131 04:50:08.143755 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gwzj2"] Jan 31 04:50:08 crc kubenswrapper[4667]: E0131 04:50:08.145145 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47f0e6d2-7560-4836-aa9a-e64249bec017" containerName="extract-content" Jan 31 04:50:08 crc kubenswrapper[4667]: I0131 04:50:08.145163 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="47f0e6d2-7560-4836-aa9a-e64249bec017" containerName="extract-content" Jan 31 04:50:08 crc kubenswrapper[4667]: E0131 04:50:08.145182 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47f0e6d2-7560-4836-aa9a-e64249bec017" containerName="extract-utilities" Jan 31 04:50:08 crc kubenswrapper[4667]: I0131 04:50:08.145193 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="47f0e6d2-7560-4836-aa9a-e64249bec017" containerName="extract-utilities" Jan 31 04:50:08 crc kubenswrapper[4667]: E0131 04:50:08.145241 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47f0e6d2-7560-4836-aa9a-e64249bec017" containerName="registry-server" Jan 31 04:50:08 crc kubenswrapper[4667]: I0131 04:50:08.145253 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="47f0e6d2-7560-4836-aa9a-e64249bec017" containerName="registry-server" Jan 31 04:50:08 crc kubenswrapper[4667]: I0131 04:50:08.145512 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="47f0e6d2-7560-4836-aa9a-e64249bec017" containerName="registry-server" Jan 31 04:50:08 crc kubenswrapper[4667]: I0131 04:50:08.147408 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gwzj2" Jan 31 04:50:08 crc kubenswrapper[4667]: I0131 04:50:08.169819 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwzj2"] Jan 31 04:50:08 crc kubenswrapper[4667]: I0131 04:50:08.204263 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f182b843-409b-4eff-b89d-fb183f39c238-utilities\") pod \"redhat-marketplace-gwzj2\" (UID: \"f182b843-409b-4eff-b89d-fb183f39c238\") " pod="openshift-marketplace/redhat-marketplace-gwzj2" Jan 31 04:50:08 crc kubenswrapper[4667]: I0131 04:50:08.204372 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x88h9\" (UniqueName: \"kubernetes.io/projected/f182b843-409b-4eff-b89d-fb183f39c238-kube-api-access-x88h9\") pod \"redhat-marketplace-gwzj2\" (UID: \"f182b843-409b-4eff-b89d-fb183f39c238\") " pod="openshift-marketplace/redhat-marketplace-gwzj2" Jan 31 04:50:08 crc kubenswrapper[4667]: I0131 04:50:08.204452 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f182b843-409b-4eff-b89d-fb183f39c238-catalog-content\") pod \"redhat-marketplace-gwzj2\" (UID: \"f182b843-409b-4eff-b89d-fb183f39c238\") " pod="openshift-marketplace/redhat-marketplace-gwzj2" Jan 31 04:50:08 crc kubenswrapper[4667]: I0131 04:50:08.306861 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x88h9\" (UniqueName: \"kubernetes.io/projected/f182b843-409b-4eff-b89d-fb183f39c238-kube-api-access-x88h9\") pod \"redhat-marketplace-gwzj2\" (UID: \"f182b843-409b-4eff-b89d-fb183f39c238\") " pod="openshift-marketplace/redhat-marketplace-gwzj2" Jan 31 04:50:08 crc kubenswrapper[4667]: I0131 04:50:08.307242 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f182b843-409b-4eff-b89d-fb183f39c238-catalog-content\") pod \"redhat-marketplace-gwzj2\" (UID: \"f182b843-409b-4eff-b89d-fb183f39c238\") " pod="openshift-marketplace/redhat-marketplace-gwzj2" Jan 31 04:50:08 crc kubenswrapper[4667]: I0131 04:50:08.307672 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f182b843-409b-4eff-b89d-fb183f39c238-utilities\") pod \"redhat-marketplace-gwzj2\" (UID: \"f182b843-409b-4eff-b89d-fb183f39c238\") " pod="openshift-marketplace/redhat-marketplace-gwzj2" Jan 31 04:50:08 crc kubenswrapper[4667]: I0131 04:50:08.308301 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f182b843-409b-4eff-b89d-fb183f39c238-catalog-content\") pod \"redhat-marketplace-gwzj2\" (UID: \"f182b843-409b-4eff-b89d-fb183f39c238\") " pod="openshift-marketplace/redhat-marketplace-gwzj2" Jan 31 04:50:08 crc kubenswrapper[4667]: I0131 04:50:08.308373 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f182b843-409b-4eff-b89d-fb183f39c238-utilities\") pod \"redhat-marketplace-gwzj2\" (UID: \"f182b843-409b-4eff-b89d-fb183f39c238\") " pod="openshift-marketplace/redhat-marketplace-gwzj2" Jan 31 04:50:08 crc kubenswrapper[4667]: I0131 04:50:08.338362 4667 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-x88h9\" (UniqueName: \"kubernetes.io/projected/f182b843-409b-4eff-b89d-fb183f39c238-kube-api-access-x88h9\") pod \"redhat-marketplace-gwzj2\" (UID: \"f182b843-409b-4eff-b89d-fb183f39c238\") " pod="openshift-marketplace/redhat-marketplace-gwzj2" Jan 31 04:50:08 crc kubenswrapper[4667]: I0131 04:50:08.472284 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gwzj2" Jan 31 04:50:09 crc kubenswrapper[4667]: I0131 04:50:09.007202 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwzj2"] Jan 31 04:50:09 crc kubenswrapper[4667]: I0131 04:50:09.571289 4667 generic.go:334] "Generic (PLEG): container finished" podID="f182b843-409b-4eff-b89d-fb183f39c238" containerID="c0c0e1a593b3de5f24ba5772996ad0a76e046e312edf0f273420977ccc801492" exitCode=0 Jan 31 04:50:09 crc kubenswrapper[4667]: I0131 04:50:09.571529 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwzj2" event={"ID":"f182b843-409b-4eff-b89d-fb183f39c238","Type":"ContainerDied","Data":"c0c0e1a593b3de5f24ba5772996ad0a76e046e312edf0f273420977ccc801492"} Jan 31 04:50:09 crc kubenswrapper[4667]: I0131 04:50:09.571871 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwzj2" event={"ID":"f182b843-409b-4eff-b89d-fb183f39c238","Type":"ContainerStarted","Data":"dbaea06bfdb0ce273aba39b0e8d8856421b2ff8fce15cbe636d048df50d080fe"} Jan 31 04:50:10 crc kubenswrapper[4667]: I0131 04:50:10.581336 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwzj2" event={"ID":"f182b843-409b-4eff-b89d-fb183f39c238","Type":"ContainerStarted","Data":"cd0020627daae1b4f6d6550d072c791d45aba1842520b3ea7056c66ede9d29e6"} Jan 31 04:50:11 crc kubenswrapper[4667]: I0131 04:50:11.601422 4667 generic.go:334] "Generic (PLEG): container finished" podID="f182b843-409b-4eff-b89d-fb183f39c238" containerID="cd0020627daae1b4f6d6550d072c791d45aba1842520b3ea7056c66ede9d29e6" exitCode=0 Jan 31 04:50:11 crc kubenswrapper[4667]: I0131 04:50:11.601486 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwzj2" event={"ID":"f182b843-409b-4eff-b89d-fb183f39c238","Type":"ContainerDied","Data":"cd0020627daae1b4f6d6550d072c791d45aba1842520b3ea7056c66ede9d29e6"} Jan 31 04:50:11 crc kubenswrapper[4667]: I0131 04:50:11.607800 4667 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:50:12 crc kubenswrapper[4667]: I0131 04:50:12.613929 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwzj2" event={"ID":"f182b843-409b-4eff-b89d-fb183f39c238","Type":"ContainerStarted","Data":"e1ab386b7267098ee91e53af1bd70460e5ddfe44b2314427caed4756def243ea"} Jan 31 04:50:12 crc kubenswrapper[4667]: I0131 04:50:12.653544 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gwzj2" podStartSLOduration=2.114137613 podStartE2EDuration="4.653501569s" podCreationTimestamp="2026-01-31 04:50:08 +0000 UTC" firstStartedPulling="2026-01-31 04:50:09.576010117 +0000 UTC m=+3733.092345416" lastFinishedPulling="2026-01-31 04:50:12.115374063 +0000 UTC m=+3735.631709372" observedRunningTime="2026-01-31 04:50:12.63754398 +0000 UTC m=+3736.153879319" watchObservedRunningTime="2026-01-31 04:50:12.653501569 +0000 UTC 
m=+3736.169836918" Jan 31 04:50:15 crc kubenswrapper[4667]: I0131 04:50:15.666930 4667 generic.go:334] "Generic (PLEG): container finished" podID="e85995d0-41be-46b4-bb54-bc7e234abbaa" containerID="67f992d856b9b986303597735ca1c1c154e685d36aa7a3a838b3f171fa1de614" exitCode=0 Jan 31 04:50:15 crc kubenswrapper[4667]: I0131 04:50:15.667046 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4kjf7/must-gather-cc9zj" event={"ID":"e85995d0-41be-46b4-bb54-bc7e234abbaa","Type":"ContainerDied","Data":"67f992d856b9b986303597735ca1c1c154e685d36aa7a3a838b3f171fa1de614"} Jan 31 04:50:15 crc kubenswrapper[4667]: I0131 04:50:15.668513 4667 scope.go:117] "RemoveContainer" containerID="67f992d856b9b986303597735ca1c1c154e685d36aa7a3a838b3f171fa1de614" Jan 31 04:50:16 crc kubenswrapper[4667]: I0131 04:50:16.293836 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4kjf7_must-gather-cc9zj_e85995d0-41be-46b4-bb54-bc7e234abbaa/gather/0.log" Jan 31 04:50:18 crc kubenswrapper[4667]: I0131 04:50:18.472749 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gwzj2" Jan 31 04:50:18 crc kubenswrapper[4667]: I0131 04:50:18.480297 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gwzj2" Jan 31 04:50:18 crc kubenswrapper[4667]: I0131 04:50:18.538622 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gwzj2" Jan 31 04:50:18 crc kubenswrapper[4667]: I0131 04:50:18.766729 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gwzj2" Jan 31 04:50:18 crc kubenswrapper[4667]: I0131 04:50:18.825045 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwzj2"] Jan 31 04:50:20 crc kubenswrapper[4667]: I0131 04:50:20.737409 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gwzj2" podUID="f182b843-409b-4eff-b89d-fb183f39c238" containerName="registry-server" containerID="cri-o://e1ab386b7267098ee91e53af1bd70460e5ddfe44b2314427caed4756def243ea" gracePeriod=2 Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.236884 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gwzj2" Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.344056 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f182b843-409b-4eff-b89d-fb183f39c238-utilities\") pod \"f182b843-409b-4eff-b89d-fb183f39c238\" (UID: \"f182b843-409b-4eff-b89d-fb183f39c238\") " Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.344128 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x88h9\" (UniqueName: \"kubernetes.io/projected/f182b843-409b-4eff-b89d-fb183f39c238-kube-api-access-x88h9\") pod \"f182b843-409b-4eff-b89d-fb183f39c238\" (UID: \"f182b843-409b-4eff-b89d-fb183f39c238\") " Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.344182 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f182b843-409b-4eff-b89d-fb183f39c238-catalog-content\") pod \"f182b843-409b-4eff-b89d-fb183f39c238\" (UID: \"f182b843-409b-4eff-b89d-fb183f39c238\") " Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.345533 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f182b843-409b-4eff-b89d-fb183f39c238-utilities" (OuterVolumeSpecName: "utilities") pod "f182b843-409b-4eff-b89d-fb183f39c238" (UID: "f182b843-409b-4eff-b89d-fb183f39c238"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.350396 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f182b843-409b-4eff-b89d-fb183f39c238-kube-api-access-x88h9" (OuterVolumeSpecName: "kube-api-access-x88h9") pod "f182b843-409b-4eff-b89d-fb183f39c238" (UID: "f182b843-409b-4eff-b89d-fb183f39c238"). InnerVolumeSpecName "kube-api-access-x88h9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.381800 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f182b843-409b-4eff-b89d-fb183f39c238-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f182b843-409b-4eff-b89d-fb183f39c238" (UID: "f182b843-409b-4eff-b89d-fb183f39c238"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.447684 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f182b843-409b-4eff-b89d-fb183f39c238-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.448090 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x88h9\" (UniqueName: \"kubernetes.io/projected/f182b843-409b-4eff-b89d-fb183f39c238-kube-api-access-x88h9\") on node \"crc\" DevicePath \"\"" Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.448187 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f182b843-409b-4eff-b89d-fb183f39c238-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.755082 4667 generic.go:334] "Generic (PLEG): container finished" podID="f182b843-409b-4eff-b89d-fb183f39c238" containerID="e1ab386b7267098ee91e53af1bd70460e5ddfe44b2314427caed4756def243ea" exitCode=0 Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.755139 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwzj2" event={"ID":"f182b843-409b-4eff-b89d-fb183f39c238","Type":"ContainerDied","Data":"e1ab386b7267098ee91e53af1bd70460e5ddfe44b2314427caed4756def243ea"} Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.755179 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwzj2" event={"ID":"f182b843-409b-4eff-b89d-fb183f39c238","Type":"ContainerDied","Data":"dbaea06bfdb0ce273aba39b0e8d8856421b2ff8fce15cbe636d048df50d080fe"} Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.755202 4667 scope.go:117] "RemoveContainer" containerID="e1ab386b7267098ee91e53af1bd70460e5ddfe44b2314427caed4756def243ea" Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.755257 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gwzj2" Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.799798 4667 scope.go:117] "RemoveContainer" containerID="cd0020627daae1b4f6d6550d072c791d45aba1842520b3ea7056c66ede9d29e6" Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.847618 4667 scope.go:117] "RemoveContainer" containerID="c0c0e1a593b3de5f24ba5772996ad0a76e046e312edf0f273420977ccc801492" Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.864549 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwzj2"] Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.893650 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwzj2"] Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.897052 4667 scope.go:117] "RemoveContainer" containerID="e1ab386b7267098ee91e53af1bd70460e5ddfe44b2314427caed4756def243ea" Jan 31 04:50:21 crc kubenswrapper[4667]: E0131 04:50:21.897712 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1ab386b7267098ee91e53af1bd70460e5ddfe44b2314427caed4756def243ea\": container with ID starting with e1ab386b7267098ee91e53af1bd70460e5ddfe44b2314427caed4756def243ea not found: ID does not exist" containerID="e1ab386b7267098ee91e53af1bd70460e5ddfe44b2314427caed4756def243ea" Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.897750 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1ab386b7267098ee91e53af1bd70460e5ddfe44b2314427caed4756def243ea"} err="failed to get container status \"e1ab386b7267098ee91e53af1bd70460e5ddfe44b2314427caed4756def243ea\": rpc error: code = NotFound desc = could not find container \"e1ab386b7267098ee91e53af1bd70460e5ddfe44b2314427caed4756def243ea\": container with ID starting with e1ab386b7267098ee91e53af1bd70460e5ddfe44b2314427caed4756def243ea not found: ID does not exist" Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.897799 4667 scope.go:117] "RemoveContainer" containerID="cd0020627daae1b4f6d6550d072c791d45aba1842520b3ea7056c66ede9d29e6" Jan 31 04:50:21 crc kubenswrapper[4667]: E0131 04:50:21.898324 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd0020627daae1b4f6d6550d072c791d45aba1842520b3ea7056c66ede9d29e6\": container with ID starting with cd0020627daae1b4f6d6550d072c791d45aba1842520b3ea7056c66ede9d29e6 not found: ID does not exist" containerID="cd0020627daae1b4f6d6550d072c791d45aba1842520b3ea7056c66ede9d29e6" Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.898349 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd0020627daae1b4f6d6550d072c791d45aba1842520b3ea7056c66ede9d29e6"} err="failed to get container status \"cd0020627daae1b4f6d6550d072c791d45aba1842520b3ea7056c66ede9d29e6\": rpc error: code = NotFound desc = could not find container \"cd0020627daae1b4f6d6550d072c791d45aba1842520b3ea7056c66ede9d29e6\": container with ID starting with cd0020627daae1b4f6d6550d072c791d45aba1842520b3ea7056c66ede9d29e6 not found: ID does not exist" Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.898365 4667 scope.go:117] "RemoveContainer" containerID="c0c0e1a593b3de5f24ba5772996ad0a76e046e312edf0f273420977ccc801492" Jan 31 04:50:21 crc kubenswrapper[4667]: E0131 04:50:21.898552 4667 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c0c0e1a593b3de5f24ba5772996ad0a76e046e312edf0f273420977ccc801492\": container with ID starting with c0c0e1a593b3de5f24ba5772996ad0a76e046e312edf0f273420977ccc801492 not found: ID does not exist" containerID="c0c0e1a593b3de5f24ba5772996ad0a76e046e312edf0f273420977ccc801492" Jan 31 04:50:21 crc kubenswrapper[4667]: I0131 04:50:21.898572 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0c0e1a593b3de5f24ba5772996ad0a76e046e312edf0f273420977ccc801492"} err="failed to get container status \"c0c0e1a593b3de5f24ba5772996ad0a76e046e312edf0f273420977ccc801492\": rpc error: code = NotFound desc = could not find container \"c0c0e1a593b3de5f24ba5772996ad0a76e046e312edf0f273420977ccc801492\": container with ID starting with c0c0e1a593b3de5f24ba5772996ad0a76e046e312edf0f273420977ccc801492 not found: ID does not exist" Jan 31 04:50:23 crc kubenswrapper[4667]: I0131 04:50:23.307863 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f182b843-409b-4eff-b89d-fb183f39c238" path="/var/lib/kubelet/pods/f182b843-409b-4eff-b89d-fb183f39c238/volumes" Jan 31 04:50:25 crc kubenswrapper[4667]: I0131 04:50:25.186051 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4kjf7/must-gather-cc9zj"] Jan 31 04:50:25 crc kubenswrapper[4667]: I0131 04:50:25.186913 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-4kjf7/must-gather-cc9zj" podUID="e85995d0-41be-46b4-bb54-bc7e234abbaa" containerName="copy" containerID="cri-o://7638ae809d4c51cf46f318e5ec87f088ad81221328e9dd6f3e7f8fc8b25d621f" gracePeriod=2 Jan 31 04:50:25 crc kubenswrapper[4667]: I0131 04:50:25.201267 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4kjf7/must-gather-cc9zj"] Jan 31 04:50:25 crc kubenswrapper[4667]: I0131 04:50:25.673905 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4kjf7_must-gather-cc9zj_e85995d0-41be-46b4-bb54-bc7e234abbaa/copy/0.log" Jan 31 04:50:25 crc kubenswrapper[4667]: I0131 04:50:25.674961 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4kjf7/must-gather-cc9zj" Jan 31 04:50:25 crc kubenswrapper[4667]: I0131 04:50:25.769146 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkfhn\" (UniqueName: \"kubernetes.io/projected/e85995d0-41be-46b4-bb54-bc7e234abbaa-kube-api-access-nkfhn\") pod \"e85995d0-41be-46b4-bb54-bc7e234abbaa\" (UID: \"e85995d0-41be-46b4-bb54-bc7e234abbaa\") " Jan 31 04:50:25 crc kubenswrapper[4667]: I0131 04:50:25.769326 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e85995d0-41be-46b4-bb54-bc7e234abbaa-must-gather-output\") pod \"e85995d0-41be-46b4-bb54-bc7e234abbaa\" (UID: \"e85995d0-41be-46b4-bb54-bc7e234abbaa\") " Jan 31 04:50:25 crc kubenswrapper[4667]: I0131 04:50:25.778494 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e85995d0-41be-46b4-bb54-bc7e234abbaa-kube-api-access-nkfhn" (OuterVolumeSpecName: "kube-api-access-nkfhn") pod "e85995d0-41be-46b4-bb54-bc7e234abbaa" (UID: "e85995d0-41be-46b4-bb54-bc7e234abbaa"). InnerVolumeSpecName "kube-api-access-nkfhn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:50:25 crc kubenswrapper[4667]: I0131 04:50:25.810445 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4kjf7_must-gather-cc9zj_e85995d0-41be-46b4-bb54-bc7e234abbaa/copy/0.log" Jan 31 04:50:25 crc kubenswrapper[4667]: I0131 04:50:25.815891 4667 generic.go:334] "Generic (PLEG): container finished" podID="e85995d0-41be-46b4-bb54-bc7e234abbaa" containerID="7638ae809d4c51cf46f318e5ec87f088ad81221328e9dd6f3e7f8fc8b25d621f" exitCode=143 Jan 31 04:50:25 crc kubenswrapper[4667]: I0131 04:50:25.815953 4667 scope.go:117] "RemoveContainer" containerID="7638ae809d4c51cf46f318e5ec87f088ad81221328e9dd6f3e7f8fc8b25d621f" Jan 31 04:50:25 crc kubenswrapper[4667]: I0131 04:50:25.816117 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4kjf7/must-gather-cc9zj" Jan 31 04:50:25 crc kubenswrapper[4667]: I0131 04:50:25.841271 4667 scope.go:117] "RemoveContainer" containerID="67f992d856b9b986303597735ca1c1c154e685d36aa7a3a838b3f171fa1de614" Jan 31 04:50:25 crc kubenswrapper[4667]: I0131 04:50:25.874756 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkfhn\" (UniqueName: \"kubernetes.io/projected/e85995d0-41be-46b4-bb54-bc7e234abbaa-kube-api-access-nkfhn\") on node \"crc\" DevicePath \"\"" Jan 31 04:50:25 crc kubenswrapper[4667]: I0131 04:50:25.888213 4667 scope.go:117] "RemoveContainer" containerID="7638ae809d4c51cf46f318e5ec87f088ad81221328e9dd6f3e7f8fc8b25d621f" Jan 31 04:50:25 crc kubenswrapper[4667]: E0131 04:50:25.888804 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7638ae809d4c51cf46f318e5ec87f088ad81221328e9dd6f3e7f8fc8b25d621f\": container with ID starting with 7638ae809d4c51cf46f318e5ec87f088ad81221328e9dd6f3e7f8fc8b25d621f not found: ID does not exist" containerID="7638ae809d4c51cf46f318e5ec87f088ad81221328e9dd6f3e7f8fc8b25d621f" Jan 31 04:50:25 crc kubenswrapper[4667]: I0131 04:50:25.889199 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7638ae809d4c51cf46f318e5ec87f088ad81221328e9dd6f3e7f8fc8b25d621f"} err="failed to get container status \"7638ae809d4c51cf46f318e5ec87f088ad81221328e9dd6f3e7f8fc8b25d621f\": rpc error: code = NotFound desc = could not find container \"7638ae809d4c51cf46f318e5ec87f088ad81221328e9dd6f3e7f8fc8b25d621f\": container with ID starting with 7638ae809d4c51cf46f318e5ec87f088ad81221328e9dd6f3e7f8fc8b25d621f not found: ID does not exist" Jan 31 04:50:25 crc kubenswrapper[4667]: I0131 04:50:25.889231 4667 scope.go:117] "RemoveContainer" containerID="67f992d856b9b986303597735ca1c1c154e685d36aa7a3a838b3f171fa1de614" Jan 31 04:50:25 crc kubenswrapper[4667]: E0131 04:50:25.889707 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67f992d856b9b986303597735ca1c1c154e685d36aa7a3a838b3f171fa1de614\": container with ID starting with 67f992d856b9b986303597735ca1c1c154e685d36aa7a3a838b3f171fa1de614 not found: ID does not exist" containerID="67f992d856b9b986303597735ca1c1c154e685d36aa7a3a838b3f171fa1de614" Jan 31 04:50:25 crc kubenswrapper[4667]: I0131 04:50:25.889757 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67f992d856b9b986303597735ca1c1c154e685d36aa7a3a838b3f171fa1de614"} err="failed to get container status 
\"67f992d856b9b986303597735ca1c1c154e685d36aa7a3a838b3f171fa1de614\": rpc error: code = NotFound desc = could not find container \"67f992d856b9b986303597735ca1c1c154e685d36aa7a3a838b3f171fa1de614\": container with ID starting with 67f992d856b9b986303597735ca1c1c154e685d36aa7a3a838b3f171fa1de614 not found: ID does not exist" Jan 31 04:50:25 crc kubenswrapper[4667]: I0131 04:50:25.970894 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e85995d0-41be-46b4-bb54-bc7e234abbaa-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e85995d0-41be-46b4-bb54-bc7e234abbaa" (UID: "e85995d0-41be-46b4-bb54-bc7e234abbaa"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 04:50:25 crc kubenswrapper[4667]: I0131 04:50:25.977407 4667 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e85995d0-41be-46b4-bb54-bc7e234abbaa-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 31 04:50:27 crc kubenswrapper[4667]: I0131 04:50:27.297140 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e85995d0-41be-46b4-bb54-bc7e234abbaa" path="/var/lib/kubelet/pods/e85995d0-41be-46b4-bb54-bc7e234abbaa/volumes" Jan 31 04:51:15 crc kubenswrapper[4667]: I0131 04:51:15.704470 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:51:15 crc kubenswrapper[4667]: I0131 04:51:15.705590 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:51:45 crc kubenswrapper[4667]: I0131 04:51:45.704961 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:51:45 crc kubenswrapper[4667]: I0131 04:51:45.705757 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 04:52:15 crc kubenswrapper[4667]: I0131 04:52:15.705037 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:52:15 crc kubenswrapper[4667]: I0131 04:52:15.705807 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 
04:52:15 crc kubenswrapper[4667]: I0131 04:52:15.705902 4667 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 04:52:15 crc kubenswrapper[4667]: I0131 04:52:15.706922 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353"} pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 04:52:15 crc kubenswrapper[4667]: I0131 04:52:15.707041 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" containerID="cri-o://3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353" gracePeriod=600 Jan 31 04:52:15 crc kubenswrapper[4667]: E0131 04:52:15.831616 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:52:16 crc kubenswrapper[4667]: I0131 04:52:16.247317 4667 generic.go:334] "Generic (PLEG): container finished" podID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353" exitCode=0 Jan 31 04:52:16 crc kubenswrapper[4667]: I0131 04:52:16.247431 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerDied","Data":"3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353"} Jan 31 04:52:16 crc kubenswrapper[4667]: I0131 04:52:16.247746 4667 scope.go:117] "RemoveContainer" containerID="e341e89dc7f476e223c40775a5e4d587a91f2ee93cf266bd05345986bb0bfd8b" Jan 31 04:52:16 crc kubenswrapper[4667]: I0131 04:52:16.248703 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353" Jan 31 04:52:16 crc kubenswrapper[4667]: E0131 04:52:16.249196 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:52:28 crc kubenswrapper[4667]: I0131 04:52:28.281682 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353" Jan 31 04:52:28 crc kubenswrapper[4667]: E0131 04:52:28.282506 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:52:43 crc kubenswrapper[4667]: I0131 04:52:43.282524 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353" Jan 31 04:52:43 crc kubenswrapper[4667]: E0131 04:52:43.283331 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:52:58 crc kubenswrapper[4667]: I0131 04:52:58.282484 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353" Jan 31 04:52:58 crc kubenswrapper[4667]: E0131 04:52:58.283643 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:53:12 crc kubenswrapper[4667]: I0131 04:53:12.282065 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353" Jan 31 04:53:12 crc kubenswrapper[4667]: E0131 04:53:12.282834 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:53:18 crc kubenswrapper[4667]: I0131 04:53:18.678497 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rdrrt/must-gather-7sd6m"] Jan 31 04:53:18 crc kubenswrapper[4667]: E0131 04:53:18.679499 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f182b843-409b-4eff-b89d-fb183f39c238" containerName="registry-server" Jan 31 04:53:18 crc kubenswrapper[4667]: I0131 04:53:18.679511 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f182b843-409b-4eff-b89d-fb183f39c238" containerName="registry-server" Jan 31 04:53:18 crc kubenswrapper[4667]: E0131 04:53:18.679525 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85995d0-41be-46b4-bb54-bc7e234abbaa" containerName="gather" Jan 31 04:53:18 crc kubenswrapper[4667]: I0131 04:53:18.679531 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85995d0-41be-46b4-bb54-bc7e234abbaa" containerName="gather" Jan 31 04:53:18 crc kubenswrapper[4667]: E0131 04:53:18.679546 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f182b843-409b-4eff-b89d-fb183f39c238" containerName="extract-content" Jan 31 04:53:18 crc kubenswrapper[4667]: I0131 04:53:18.679552 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f182b843-409b-4eff-b89d-fb183f39c238" containerName="extract-content" Jan 31 04:53:18 crc kubenswrapper[4667]: E0131 04:53:18.679562 4667 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f182b843-409b-4eff-b89d-fb183f39c238" containerName="extract-utilities" Jan 31 04:53:18 crc kubenswrapper[4667]: I0131 04:53:18.679571 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f182b843-409b-4eff-b89d-fb183f39c238" containerName="extract-utilities" Jan 31 04:53:18 crc kubenswrapper[4667]: E0131 04:53:18.679602 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85995d0-41be-46b4-bb54-bc7e234abbaa" containerName="copy" Jan 31 04:53:18 crc kubenswrapper[4667]: I0131 04:53:18.679607 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85995d0-41be-46b4-bb54-bc7e234abbaa" containerName="copy" Jan 31 04:53:18 crc kubenswrapper[4667]: I0131 04:53:18.679772 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="f182b843-409b-4eff-b89d-fb183f39c238" containerName="registry-server" Jan 31 04:53:18 crc kubenswrapper[4667]: I0131 04:53:18.679783 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85995d0-41be-46b4-bb54-bc7e234abbaa" containerName="copy" Jan 31 04:53:18 crc kubenswrapper[4667]: I0131 04:53:18.679801 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85995d0-41be-46b4-bb54-bc7e234abbaa" containerName="gather" Jan 31 04:53:18 crc kubenswrapper[4667]: I0131 04:53:18.698953 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rdrrt/must-gather-7sd6m"] Jan 31 04:53:18 crc kubenswrapper[4667]: I0131 04:53:18.699049 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdrrt/must-gather-7sd6m" Jan 31 04:53:18 crc kubenswrapper[4667]: I0131 04:53:18.700389 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rdrrt"/"default-dockercfg-dpsfr" Jan 31 04:53:18 crc kubenswrapper[4667]: I0131 04:53:18.701294 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rdrrt"/"openshift-service-ca.crt" Jan 31 04:53:18 crc kubenswrapper[4667]: I0131 04:53:18.702093 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rdrrt"/"kube-root-ca.crt" Jan 31 04:53:18 crc kubenswrapper[4667]: I0131 04:53:18.759049 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6300166c-bced-499e-b7f3-1238570ddc71-must-gather-output\") pod \"must-gather-7sd6m\" (UID: \"6300166c-bced-499e-b7f3-1238570ddc71\") " pod="openshift-must-gather-rdrrt/must-gather-7sd6m" Jan 31 04:53:18 crc kubenswrapper[4667]: I0131 04:53:18.759115 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzfq9\" (UniqueName: \"kubernetes.io/projected/6300166c-bced-499e-b7f3-1238570ddc71-kube-api-access-pzfq9\") pod \"must-gather-7sd6m\" (UID: \"6300166c-bced-499e-b7f3-1238570ddc71\") " pod="openshift-must-gather-rdrrt/must-gather-7sd6m" Jan 31 04:53:18 crc kubenswrapper[4667]: I0131 04:53:18.861518 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6300166c-bced-499e-b7f3-1238570ddc71-must-gather-output\") pod \"must-gather-7sd6m\" (UID: \"6300166c-bced-499e-b7f3-1238570ddc71\") " pod="openshift-must-gather-rdrrt/must-gather-7sd6m" Jan 31 04:53:18 crc kubenswrapper[4667]: I0131 04:53:18.861610 4667 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pzfq9\" (UniqueName: \"kubernetes.io/projected/6300166c-bced-499e-b7f3-1238570ddc71-kube-api-access-pzfq9\") pod \"must-gather-7sd6m\" (UID: \"6300166c-bced-499e-b7f3-1238570ddc71\") " pod="openshift-must-gather-rdrrt/must-gather-7sd6m" Jan 31 04:53:18 crc kubenswrapper[4667]: I0131 04:53:18.861972 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6300166c-bced-499e-b7f3-1238570ddc71-must-gather-output\") pod \"must-gather-7sd6m\" (UID: \"6300166c-bced-499e-b7f3-1238570ddc71\") " pod="openshift-must-gather-rdrrt/must-gather-7sd6m" Jan 31 04:53:18 crc kubenswrapper[4667]: I0131 04:53:18.888244 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzfq9\" (UniqueName: \"kubernetes.io/projected/6300166c-bced-499e-b7f3-1238570ddc71-kube-api-access-pzfq9\") pod \"must-gather-7sd6m\" (UID: \"6300166c-bced-499e-b7f3-1238570ddc71\") " pod="openshift-must-gather-rdrrt/must-gather-7sd6m" Jan 31 04:53:19 crc kubenswrapper[4667]: I0131 04:53:19.034851 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdrrt/must-gather-7sd6m" Jan 31 04:53:19 crc kubenswrapper[4667]: I0131 04:53:19.536684 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rdrrt/must-gather-7sd6m"] Jan 31 04:53:19 crc kubenswrapper[4667]: I0131 04:53:19.916791 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdrrt/must-gather-7sd6m" event={"ID":"6300166c-bced-499e-b7f3-1238570ddc71","Type":"ContainerStarted","Data":"6e51e9ecda6475a88c0a2bc4c2976367e222fb37581f05079d6875f4a56412ee"} Jan 31 04:53:19 crc kubenswrapper[4667]: I0131 04:53:19.916949 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdrrt/must-gather-7sd6m" event={"ID":"6300166c-bced-499e-b7f3-1238570ddc71","Type":"ContainerStarted","Data":"be8497c99cbc66c0bd02235457acaa41b60354877712c6fc5137c3eb7bf19fad"} Jan 31 04:53:20 crc kubenswrapper[4667]: I0131 04:53:20.932109 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdrrt/must-gather-7sd6m" event={"ID":"6300166c-bced-499e-b7f3-1238570ddc71","Type":"ContainerStarted","Data":"d8709268d29d67488b93f0bf3fda1bf23882e929ca9a67cda4ecd2d49c808cc1"} Jan 31 04:53:20 crc kubenswrapper[4667]: I0131 04:53:20.959092 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rdrrt/must-gather-7sd6m" podStartSLOduration=2.959071455 podStartE2EDuration="2.959071455s" podCreationTimestamp="2026-01-31 04:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:53:20.955494541 +0000 UTC m=+3924.471829840" watchObservedRunningTime="2026-01-31 04:53:20.959071455 +0000 UTC m=+3924.475406764" Jan 31 04:53:24 crc kubenswrapper[4667]: I0131 04:53:24.389111 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rdrrt/crc-debug-g5nlt"] Jan 31 04:53:24 crc kubenswrapper[4667]: I0131 04:53:24.393183 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rdrrt/crc-debug-g5nlt" Jan 31 04:53:24 crc kubenswrapper[4667]: I0131 04:53:24.489238 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4383a994-326e-4094-a8af-a75da55b4acc-host\") pod \"crc-debug-g5nlt\" (UID: \"4383a994-326e-4094-a8af-a75da55b4acc\") " pod="openshift-must-gather-rdrrt/crc-debug-g5nlt" Jan 31 04:53:24 crc kubenswrapper[4667]: I0131 04:53:24.489326 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp4gq\" (UniqueName: \"kubernetes.io/projected/4383a994-326e-4094-a8af-a75da55b4acc-kube-api-access-kp4gq\") pod \"crc-debug-g5nlt\" (UID: \"4383a994-326e-4094-a8af-a75da55b4acc\") " pod="openshift-must-gather-rdrrt/crc-debug-g5nlt" Jan 31 04:53:24 crc kubenswrapper[4667]: I0131 04:53:24.591788 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4383a994-326e-4094-a8af-a75da55b4acc-host\") pod \"crc-debug-g5nlt\" (UID: \"4383a994-326e-4094-a8af-a75da55b4acc\") " pod="openshift-must-gather-rdrrt/crc-debug-g5nlt" Jan 31 04:53:24 crc kubenswrapper[4667]: I0131 04:53:24.591920 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp4gq\" (UniqueName: \"kubernetes.io/projected/4383a994-326e-4094-a8af-a75da55b4acc-kube-api-access-kp4gq\") pod \"crc-debug-g5nlt\" (UID: \"4383a994-326e-4094-a8af-a75da55b4acc\") " pod="openshift-must-gather-rdrrt/crc-debug-g5nlt" Jan 31 04:53:24 crc kubenswrapper[4667]: I0131 04:53:24.591974 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4383a994-326e-4094-a8af-a75da55b4acc-host\") pod \"crc-debug-g5nlt\" (UID: \"4383a994-326e-4094-a8af-a75da55b4acc\") " pod="openshift-must-gather-rdrrt/crc-debug-g5nlt" Jan 31 04:53:24 crc kubenswrapper[4667]: I0131 04:53:24.611293 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp4gq\" (UniqueName: \"kubernetes.io/projected/4383a994-326e-4094-a8af-a75da55b4acc-kube-api-access-kp4gq\") pod \"crc-debug-g5nlt\" (UID: \"4383a994-326e-4094-a8af-a75da55b4acc\") " pod="openshift-must-gather-rdrrt/crc-debug-g5nlt" Jan 31 04:53:24 crc kubenswrapper[4667]: I0131 04:53:24.724298 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rdrrt/crc-debug-g5nlt" Jan 31 04:53:24 crc kubenswrapper[4667]: I0131 04:53:24.971779 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdrrt/crc-debug-g5nlt" event={"ID":"4383a994-326e-4094-a8af-a75da55b4acc","Type":"ContainerStarted","Data":"b716dfa69d5cd6449932a408302e2f0b69fbf489b3ce2678321ec3b0328d7bfc"} Jan 31 04:53:25 crc kubenswrapper[4667]: I0131 04:53:25.981513 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdrrt/crc-debug-g5nlt" event={"ID":"4383a994-326e-4094-a8af-a75da55b4acc","Type":"ContainerStarted","Data":"bb45ee5d16d1424d7959d487ffb9d2999f56a52e3cd18db231dd5cd932e356c5"} Jan 31 04:53:26 crc kubenswrapper[4667]: I0131 04:53:26.010260 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rdrrt/crc-debug-g5nlt" podStartSLOduration=2.010237286 podStartE2EDuration="2.010237286s" podCreationTimestamp="2026-01-31 04:53:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 04:53:26.001896497 +0000 UTC m=+3929.518231796" watchObservedRunningTime="2026-01-31 04:53:26.010237286 +0000 UTC m=+3929.526572585" Jan 31 04:53:27 crc kubenswrapper[4667]: I0131 04:53:27.287097 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353" Jan 31 04:53:27 crc kubenswrapper[4667]: E0131 04:53:27.287678 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:53:42 crc kubenswrapper[4667]: I0131 04:53:42.282236 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353" Jan 31 04:53:42 crc kubenswrapper[4667]: E0131 04:53:42.283601 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:53:53 crc kubenswrapper[4667]: I0131 04:53:53.282462 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353" Jan 31 04:53:53 crc kubenswrapper[4667]: E0131 04:53:53.283256 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:54:03 crc kubenswrapper[4667]: I0131 04:54:03.354190 4667 generic.go:334] "Generic (PLEG): container finished" podID="4383a994-326e-4094-a8af-a75da55b4acc" containerID="bb45ee5d16d1424d7959d487ffb9d2999f56a52e3cd18db231dd5cd932e356c5" 
exitCode=0 Jan 31 04:54:03 crc kubenswrapper[4667]: I0131 04:54:03.354306 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdrrt/crc-debug-g5nlt" event={"ID":"4383a994-326e-4094-a8af-a75da55b4acc","Type":"ContainerDied","Data":"bb45ee5d16d1424d7959d487ffb9d2999f56a52e3cd18db231dd5cd932e356c5"} Jan 31 04:54:04 crc kubenswrapper[4667]: I0131 04:54:04.462408 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdrrt/crc-debug-g5nlt" Jan 31 04:54:04 crc kubenswrapper[4667]: I0131 04:54:04.494695 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rdrrt/crc-debug-g5nlt"] Jan 31 04:54:04 crc kubenswrapper[4667]: I0131 04:54:04.505658 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rdrrt/crc-debug-g5nlt"] Jan 31 04:54:04 crc kubenswrapper[4667]: I0131 04:54:04.556612 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp4gq\" (UniqueName: \"kubernetes.io/projected/4383a994-326e-4094-a8af-a75da55b4acc-kube-api-access-kp4gq\") pod \"4383a994-326e-4094-a8af-a75da55b4acc\" (UID: \"4383a994-326e-4094-a8af-a75da55b4acc\") " Jan 31 04:54:04 crc kubenswrapper[4667]: I0131 04:54:04.557096 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4383a994-326e-4094-a8af-a75da55b4acc-host\") pod \"4383a994-326e-4094-a8af-a75da55b4acc\" (UID: \"4383a994-326e-4094-a8af-a75da55b4acc\") " Jan 31 04:54:04 crc kubenswrapper[4667]: I0131 04:54:04.557215 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4383a994-326e-4094-a8af-a75da55b4acc-host" (OuterVolumeSpecName: "host") pod "4383a994-326e-4094-a8af-a75da55b4acc" (UID: "4383a994-326e-4094-a8af-a75da55b4acc"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:54:04 crc kubenswrapper[4667]: I0131 04:54:04.557962 4667 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4383a994-326e-4094-a8af-a75da55b4acc-host\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:04 crc kubenswrapper[4667]: I0131 04:54:04.575084 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4383a994-326e-4094-a8af-a75da55b4acc-kube-api-access-kp4gq" (OuterVolumeSpecName: "kube-api-access-kp4gq") pod "4383a994-326e-4094-a8af-a75da55b4acc" (UID: "4383a994-326e-4094-a8af-a75da55b4acc"). InnerVolumeSpecName "kube-api-access-kp4gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:54:04 crc kubenswrapper[4667]: I0131 04:54:04.660279 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp4gq\" (UniqueName: \"kubernetes.io/projected/4383a994-326e-4094-a8af-a75da55b4acc-kube-api-access-kp4gq\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:05 crc kubenswrapper[4667]: I0131 04:54:05.296529 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4383a994-326e-4094-a8af-a75da55b4acc" path="/var/lib/kubelet/pods/4383a994-326e-4094-a8af-a75da55b4acc/volumes" Jan 31 04:54:05 crc kubenswrapper[4667]: I0131 04:54:05.372460 4667 scope.go:117] "RemoveContainer" containerID="bb45ee5d16d1424d7959d487ffb9d2999f56a52e3cd18db231dd5cd932e356c5" Jan 31 04:54:05 crc kubenswrapper[4667]: I0131 04:54:05.372565 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rdrrt/crc-debug-g5nlt" Jan 31 04:54:05 crc kubenswrapper[4667]: I0131 04:54:05.838033 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rdrrt/crc-debug-dpsv7"] Jan 31 04:54:05 crc kubenswrapper[4667]: E0131 04:54:05.838713 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4383a994-326e-4094-a8af-a75da55b4acc" containerName="container-00" Jan 31 04:54:05 crc kubenswrapper[4667]: I0131 04:54:05.838736 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="4383a994-326e-4094-a8af-a75da55b4acc" containerName="container-00" Jan 31 04:54:05 crc kubenswrapper[4667]: I0131 04:54:05.839031 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="4383a994-326e-4094-a8af-a75da55b4acc" containerName="container-00" Jan 31 04:54:05 crc kubenswrapper[4667]: I0131 04:54:05.840082 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdrrt/crc-debug-dpsv7" Jan 31 04:54:05 crc kubenswrapper[4667]: I0131 04:54:05.892185 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7v78\" (UniqueName: \"kubernetes.io/projected/43d8927f-9615-4e2c-9d59-e8f1d89ad2f1-kube-api-access-x7v78\") pod \"crc-debug-dpsv7\" (UID: \"43d8927f-9615-4e2c-9d59-e8f1d89ad2f1\") " pod="openshift-must-gather-rdrrt/crc-debug-dpsv7" Jan 31 04:54:05 crc kubenswrapper[4667]: I0131 04:54:05.892485 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/43d8927f-9615-4e2c-9d59-e8f1d89ad2f1-host\") pod \"crc-debug-dpsv7\" (UID: \"43d8927f-9615-4e2c-9d59-e8f1d89ad2f1\") " pod="openshift-must-gather-rdrrt/crc-debug-dpsv7" Jan 31 04:54:05 crc kubenswrapper[4667]: I0131 04:54:05.993885 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7v78\" (UniqueName: \"kubernetes.io/projected/43d8927f-9615-4e2c-9d59-e8f1d89ad2f1-kube-api-access-x7v78\") pod \"crc-debug-dpsv7\" (UID: \"43d8927f-9615-4e2c-9d59-e8f1d89ad2f1\") " pod="openshift-must-gather-rdrrt/crc-debug-dpsv7" Jan 31 04:54:05 crc kubenswrapper[4667]: I0131 04:54:05.993968 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/43d8927f-9615-4e2c-9d59-e8f1d89ad2f1-host\") pod \"crc-debug-dpsv7\" (UID: \"43d8927f-9615-4e2c-9d59-e8f1d89ad2f1\") " pod="openshift-must-gather-rdrrt/crc-debug-dpsv7" Jan 31 04:54:05 crc kubenswrapper[4667]: I0131 04:54:05.994208 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/43d8927f-9615-4e2c-9d59-e8f1d89ad2f1-host\") pod \"crc-debug-dpsv7\" (UID: \"43d8927f-9615-4e2c-9d59-e8f1d89ad2f1\") " pod="openshift-must-gather-rdrrt/crc-debug-dpsv7" Jan 31 04:54:06 crc kubenswrapper[4667]: I0131 04:54:06.020098 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7v78\" (UniqueName: \"kubernetes.io/projected/43d8927f-9615-4e2c-9d59-e8f1d89ad2f1-kube-api-access-x7v78\") pod \"crc-debug-dpsv7\" (UID: \"43d8927f-9615-4e2c-9d59-e8f1d89ad2f1\") " pod="openshift-must-gather-rdrrt/crc-debug-dpsv7" Jan 31 04:54:06 crc kubenswrapper[4667]: I0131 04:54:06.172493 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rdrrt/crc-debug-dpsv7" Jan 31 04:54:06 crc kubenswrapper[4667]: I0131 04:54:06.382326 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdrrt/crc-debug-dpsv7" event={"ID":"43d8927f-9615-4e2c-9d59-e8f1d89ad2f1","Type":"ContainerStarted","Data":"1b41e48bfa36f0adda97a8ae7052f29aa3991a750f9c5090c16d56001f63d6c4"} Jan 31 04:54:07 crc kubenswrapper[4667]: I0131 04:54:07.289710 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353" Jan 31 04:54:07 crc kubenswrapper[4667]: E0131 04:54:07.290650 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:54:07 crc kubenswrapper[4667]: I0131 04:54:07.391104 4667 generic.go:334] "Generic (PLEG): container finished" podID="43d8927f-9615-4e2c-9d59-e8f1d89ad2f1" containerID="6f6932c8c48275e75a5b0dd2b45961a96a3747c5782eeb89c8ffec574219cd79" exitCode=0 Jan 31 04:54:07 crc kubenswrapper[4667]: I0131 04:54:07.391188 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdrrt/crc-debug-dpsv7" event={"ID":"43d8927f-9615-4e2c-9d59-e8f1d89ad2f1","Type":"ContainerDied","Data":"6f6932c8c48275e75a5b0dd2b45961a96a3747c5782eeb89c8ffec574219cd79"} Jan 31 04:54:07 crc kubenswrapper[4667]: I0131 04:54:07.757347 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rdrrt/crc-debug-dpsv7"] Jan 31 04:54:07 crc kubenswrapper[4667]: I0131 04:54:07.766249 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rdrrt/crc-debug-dpsv7"] Jan 31 04:54:08 crc kubenswrapper[4667]: I0131 04:54:08.535970 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdrrt/crc-debug-dpsv7" Jan 31 04:54:08 crc kubenswrapper[4667]: I0131 04:54:08.648615 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7v78\" (UniqueName: \"kubernetes.io/projected/43d8927f-9615-4e2c-9d59-e8f1d89ad2f1-kube-api-access-x7v78\") pod \"43d8927f-9615-4e2c-9d59-e8f1d89ad2f1\" (UID: \"43d8927f-9615-4e2c-9d59-e8f1d89ad2f1\") " Jan 31 04:54:08 crc kubenswrapper[4667]: I0131 04:54:08.648868 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/43d8927f-9615-4e2c-9d59-e8f1d89ad2f1-host\") pod \"43d8927f-9615-4e2c-9d59-e8f1d89ad2f1\" (UID: \"43d8927f-9615-4e2c-9d59-e8f1d89ad2f1\") " Jan 31 04:54:08 crc kubenswrapper[4667]: I0131 04:54:08.649015 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43d8927f-9615-4e2c-9d59-e8f1d89ad2f1-host" (OuterVolumeSpecName: "host") pod "43d8927f-9615-4e2c-9d59-e8f1d89ad2f1" (UID: "43d8927f-9615-4e2c-9d59-e8f1d89ad2f1"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:54:08 crc kubenswrapper[4667]: I0131 04:54:08.650055 4667 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/43d8927f-9615-4e2c-9d59-e8f1d89ad2f1-host\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:08 crc kubenswrapper[4667]: I0131 04:54:08.657132 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43d8927f-9615-4e2c-9d59-e8f1d89ad2f1-kube-api-access-x7v78" (OuterVolumeSpecName: "kube-api-access-x7v78") pod "43d8927f-9615-4e2c-9d59-e8f1d89ad2f1" (UID: "43d8927f-9615-4e2c-9d59-e8f1d89ad2f1"). InnerVolumeSpecName "kube-api-access-x7v78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:54:08 crc kubenswrapper[4667]: I0131 04:54:08.752340 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7v78\" (UniqueName: \"kubernetes.io/projected/43d8927f-9615-4e2c-9d59-e8f1d89ad2f1-kube-api-access-x7v78\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:09 crc kubenswrapper[4667]: I0131 04:54:09.037646 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rdrrt/crc-debug-nx5gn"] Jan 31 04:54:09 crc kubenswrapper[4667]: E0131 04:54:09.038410 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43d8927f-9615-4e2c-9d59-e8f1d89ad2f1" containerName="container-00" Jan 31 04:54:09 crc kubenswrapper[4667]: I0131 04:54:09.038427 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="43d8927f-9615-4e2c-9d59-e8f1d89ad2f1" containerName="container-00" Jan 31 04:54:09 crc kubenswrapper[4667]: I0131 04:54:09.038641 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="43d8927f-9615-4e2c-9d59-e8f1d89ad2f1" containerName="container-00" Jan 31 04:54:09 crc kubenswrapper[4667]: I0131 04:54:09.039229 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rdrrt/crc-debug-nx5gn" Jan 31 04:54:09 crc kubenswrapper[4667]: I0131 04:54:09.162736 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f2849dfb-294a-4fbf-b738-24934539079b-host\") pod \"crc-debug-nx5gn\" (UID: \"f2849dfb-294a-4fbf-b738-24934539079b\") " pod="openshift-must-gather-rdrrt/crc-debug-nx5gn" Jan 31 04:54:09 crc kubenswrapper[4667]: I0131 04:54:09.162917 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc9qm\" (UniqueName: \"kubernetes.io/projected/f2849dfb-294a-4fbf-b738-24934539079b-kube-api-access-gc9qm\") pod \"crc-debug-nx5gn\" (UID: \"f2849dfb-294a-4fbf-b738-24934539079b\") " pod="openshift-must-gather-rdrrt/crc-debug-nx5gn" Jan 31 04:54:09 crc kubenswrapper[4667]: I0131 04:54:09.264154 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f2849dfb-294a-4fbf-b738-24934539079b-host\") pod \"crc-debug-nx5gn\" (UID: \"f2849dfb-294a-4fbf-b738-24934539079b\") " pod="openshift-must-gather-rdrrt/crc-debug-nx5gn" Jan 31 04:54:09 crc kubenswrapper[4667]: I0131 04:54:09.264276 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc9qm\" (UniqueName: \"kubernetes.io/projected/f2849dfb-294a-4fbf-b738-24934539079b-kube-api-access-gc9qm\") pod \"crc-debug-nx5gn\" (UID: \"f2849dfb-294a-4fbf-b738-24934539079b\") " pod="openshift-must-gather-rdrrt/crc-debug-nx5gn" Jan 31 04:54:09 crc kubenswrapper[4667]: I0131 04:54:09.264592 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f2849dfb-294a-4fbf-b738-24934539079b-host\") pod \"crc-debug-nx5gn\" (UID: \"f2849dfb-294a-4fbf-b738-24934539079b\") " pod="openshift-must-gather-rdrrt/crc-debug-nx5gn" Jan 31 04:54:09 crc kubenswrapper[4667]: I0131 04:54:09.288290 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc9qm\" (UniqueName: \"kubernetes.io/projected/f2849dfb-294a-4fbf-b738-24934539079b-kube-api-access-gc9qm\") pod \"crc-debug-nx5gn\" (UID: \"f2849dfb-294a-4fbf-b738-24934539079b\") " pod="openshift-must-gather-rdrrt/crc-debug-nx5gn" Jan 31 04:54:09 crc kubenswrapper[4667]: I0131 04:54:09.291592 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43d8927f-9615-4e2c-9d59-e8f1d89ad2f1" path="/var/lib/kubelet/pods/43d8927f-9615-4e2c-9d59-e8f1d89ad2f1/volumes" Jan 31 04:54:09 crc kubenswrapper[4667]: I0131 04:54:09.361386 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rdrrt/crc-debug-nx5gn" Jan 31 04:54:09 crc kubenswrapper[4667]: W0131 04:54:09.394546 4667 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2849dfb_294a_4fbf_b738_24934539079b.slice/crio-4e0884c12ee9a9c08f9613e3cad0b88c198643b18192ad7811943c7ca957cd1a WatchSource:0}: Error finding container 4e0884c12ee9a9c08f9613e3cad0b88c198643b18192ad7811943c7ca957cd1a: Status 404 returned error can't find the container with id 4e0884c12ee9a9c08f9613e3cad0b88c198643b18192ad7811943c7ca957cd1a Jan 31 04:54:09 crc kubenswrapper[4667]: I0131 04:54:09.438485 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdrrt/crc-debug-nx5gn" event={"ID":"f2849dfb-294a-4fbf-b738-24934539079b","Type":"ContainerStarted","Data":"4e0884c12ee9a9c08f9613e3cad0b88c198643b18192ad7811943c7ca957cd1a"} Jan 31 04:54:09 crc kubenswrapper[4667]: I0131 04:54:09.441102 4667 scope.go:117] "RemoveContainer" containerID="6f6932c8c48275e75a5b0dd2b45961a96a3747c5782eeb89c8ffec574219cd79" Jan 31 04:54:09 crc kubenswrapper[4667]: I0131 04:54:09.441254 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdrrt/crc-debug-dpsv7" Jan 31 04:54:10 crc kubenswrapper[4667]: I0131 04:54:10.450364 4667 generic.go:334] "Generic (PLEG): container finished" podID="f2849dfb-294a-4fbf-b738-24934539079b" containerID="c2ce74d881cfe69e0267db74212814516f883c0f2413a522a588f30c1f8a326e" exitCode=0 Jan 31 04:54:10 crc kubenswrapper[4667]: I0131 04:54:10.450441 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdrrt/crc-debug-nx5gn" event={"ID":"f2849dfb-294a-4fbf-b738-24934539079b","Type":"ContainerDied","Data":"c2ce74d881cfe69e0267db74212814516f883c0f2413a522a588f30c1f8a326e"} Jan 31 04:54:10 crc kubenswrapper[4667]: I0131 04:54:10.486992 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rdrrt/crc-debug-nx5gn"] Jan 31 04:54:10 crc kubenswrapper[4667]: I0131 04:54:10.496178 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rdrrt/crc-debug-nx5gn"] Jan 31 04:54:11 crc kubenswrapper[4667]: I0131 04:54:11.559390 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdrrt/crc-debug-nx5gn" Jan 31 04:54:11 crc kubenswrapper[4667]: I0131 04:54:11.710735 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f2849dfb-294a-4fbf-b738-24934539079b-host\") pod \"f2849dfb-294a-4fbf-b738-24934539079b\" (UID: \"f2849dfb-294a-4fbf-b738-24934539079b\") " Jan 31 04:54:11 crc kubenswrapper[4667]: I0131 04:54:11.710830 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2849dfb-294a-4fbf-b738-24934539079b-host" (OuterVolumeSpecName: "host") pod "f2849dfb-294a-4fbf-b738-24934539079b" (UID: "f2849dfb-294a-4fbf-b738-24934539079b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 04:54:11 crc kubenswrapper[4667]: I0131 04:54:11.710937 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc9qm\" (UniqueName: \"kubernetes.io/projected/f2849dfb-294a-4fbf-b738-24934539079b-kube-api-access-gc9qm\") pod \"f2849dfb-294a-4fbf-b738-24934539079b\" (UID: \"f2849dfb-294a-4fbf-b738-24934539079b\") " Jan 31 04:54:11 crc kubenswrapper[4667]: I0131 04:54:11.711625 4667 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f2849dfb-294a-4fbf-b738-24934539079b-host\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:11 crc kubenswrapper[4667]: I0131 04:54:11.717187 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2849dfb-294a-4fbf-b738-24934539079b-kube-api-access-gc9qm" (OuterVolumeSpecName: "kube-api-access-gc9qm") pod "f2849dfb-294a-4fbf-b738-24934539079b" (UID: "f2849dfb-294a-4fbf-b738-24934539079b"). InnerVolumeSpecName "kube-api-access-gc9qm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 04:54:11 crc kubenswrapper[4667]: I0131 04:54:11.813167 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc9qm\" (UniqueName: \"kubernetes.io/projected/f2849dfb-294a-4fbf-b738-24934539079b-kube-api-access-gc9qm\") on node \"crc\" DevicePath \"\"" Jan 31 04:54:12 crc kubenswrapper[4667]: I0131 04:54:12.471303 4667 scope.go:117] "RemoveContainer" containerID="c2ce74d881cfe69e0267db74212814516f883c0f2413a522a588f30c1f8a326e" Jan 31 04:54:12 crc kubenswrapper[4667]: I0131 04:54:12.471978 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdrrt/crc-debug-nx5gn" Jan 31 04:54:13 crc kubenswrapper[4667]: I0131 04:54:13.293700 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2849dfb-294a-4fbf-b738-24934539079b" path="/var/lib/kubelet/pods/f2849dfb-294a-4fbf-b738-24934539079b/volumes" Jan 31 04:54:18 crc kubenswrapper[4667]: I0131 04:54:18.281940 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353" Jan 31 04:54:18 crc kubenswrapper[4667]: E0131 04:54:18.282830 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:54:29 crc kubenswrapper[4667]: I0131 04:54:29.282866 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353" Jan 31 04:54:29 crc kubenswrapper[4667]: E0131 04:54:29.284916 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:54:41 crc kubenswrapper[4667]: I0131 04:54:41.282353 4667 scope.go:117] "RemoveContainer" 
containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353" Jan 31 04:54:41 crc kubenswrapper[4667]: E0131 04:54:41.283550 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:54:55 crc kubenswrapper[4667]: I0131 04:54:55.284549 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353" Jan 31 04:54:55 crc kubenswrapper[4667]: E0131 04:54:55.285311 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:54:58 crc kubenswrapper[4667]: I0131 04:54:58.907820 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7b9d9fcc56-wmjp8_d8b59858-7b18-4bad-b555-b978f3fbea56/barbican-api/0.log" Jan 31 04:54:59 crc kubenswrapper[4667]: I0131 04:54:59.218163 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d8b947646-tj8c8_c6848ab0-06c2-4eed-9c5e-a1e205da260a/barbican-keystone-listener/0.log" Jan 31 04:55:00 crc kubenswrapper[4667]: I0131 04:55:00.309929 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-666d645645-4kb44_efc85fb0-e1c4-4a14-aeeb-a0526ff668d1/barbican-worker/0.log" Jan 31 04:55:00 crc kubenswrapper[4667]: I0131 04:55:00.529404 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d8b947646-tj8c8_c6848ab0-06c2-4eed-9c5e-a1e205da260a/barbican-keystone-listener-log/0.log" Jan 31 04:55:00 crc kubenswrapper[4667]: I0131 04:55:00.603616 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-666d645645-4kb44_efc85fb0-e1c4-4a14-aeeb-a0526ff668d1/barbican-worker-log/0.log" Jan 31 04:55:00 crc kubenswrapper[4667]: I0131 04:55:00.740568 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7b9d9fcc56-wmjp8_d8b59858-7b18-4bad-b555-b978f3fbea56/barbican-api-log/0.log" Jan 31 04:55:00 crc kubenswrapper[4667]: I0131 04:55:00.851237 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-r92t5_24442823-d584-44f3-bf92-1e3382adb87f/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:55:00 crc kubenswrapper[4667]: I0131 04:55:00.921409 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ef1c8a6a-c6c2-451b-9030-9689f2ed116f/ceilometer-central-agent/0.log" Jan 31 04:55:00 crc kubenswrapper[4667]: I0131 04:55:00.977788 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ef1c8a6a-c6c2-451b-9030-9689f2ed116f/ceilometer-notification-agent/0.log" Jan 31 04:55:01 crc kubenswrapper[4667]: I0131 04:55:01.063506 4667 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_ef1c8a6a-c6c2-451b-9030-9689f2ed116f/proxy-httpd/0.log" Jan 31 04:55:01 crc kubenswrapper[4667]: I0131 04:55:01.124807 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ef1c8a6a-c6c2-451b-9030-9689f2ed116f/sg-core/0.log" Jan 31 04:55:01 crc kubenswrapper[4667]: I0131 04:55:01.222924 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_10513551-238c-4a99-83c9-2992fb1bbaae/cinder-api/0.log" Jan 31 04:55:01 crc kubenswrapper[4667]: I0131 04:55:01.808832 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_10513551-238c-4a99-83c9-2992fb1bbaae/cinder-api-log/0.log" Jan 31 04:55:01 crc kubenswrapper[4667]: I0131 04:55:01.946085 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a/cinder-scheduler/0.log" Jan 31 04:55:02 crc kubenswrapper[4667]: I0131 04:55:02.068958 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c7e2e3d6-d3b6-49cf-b414-0ee3d0c72d6a/probe/0.log" Jan 31 04:55:02 crc kubenswrapper[4667]: I0131 04:55:02.210486 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-jpglr_2c49961f-cfd8-428d-b32b-4e3f85e554d5/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:55:02 crc kubenswrapper[4667]: I0131 04:55:02.335985 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-6j7k8_f2ba4344-86fc-4f0f-86ed-7daec27549ec/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:55:02 crc kubenswrapper[4667]: I0131 04:55:02.508107 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-f4d4c4b7-jpndh_ab755590-ad93-4840-b261-9317b1c0cb54/init/0.log" Jan 31 04:55:02 crc kubenswrapper[4667]: I0131 04:55:02.730912 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-f4d4c4b7-jpndh_ab755590-ad93-4840-b261-9317b1c0cb54/dnsmasq-dns/0.log" Jan 31 04:55:02 crc kubenswrapper[4667]: I0131 04:55:02.746338 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-f4d4c4b7-jpndh_ab755590-ad93-4840-b261-9317b1c0cb54/init/0.log" Jan 31 04:55:02 crc kubenswrapper[4667]: I0131 04:55:02.795592 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-r7phz_15d1c9f5-7546-4262-ada1-71b362ddd67e/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:55:03 crc kubenswrapper[4667]: I0131 04:55:03.055880 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e2e0a40d-c35f-443a-97b3-0150c13d56e4/glance-httpd/0.log" Jan 31 04:55:03 crc kubenswrapper[4667]: I0131 04:55:03.056758 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e2e0a40d-c35f-443a-97b3-0150c13d56e4/glance-log/0.log" Jan 31 04:55:03 crc kubenswrapper[4667]: I0131 04:55:03.255534 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b9aae903-8070-44c6-8826-ec0ff7d90139/glance-httpd/0.log" Jan 31 04:55:03 crc kubenswrapper[4667]: I0131 04:55:03.308771 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b9aae903-8070-44c6-8826-ec0ff7d90139/glance-log/0.log" Jan 31 
04:55:03 crc kubenswrapper[4667]: I0131 04:55:03.449488 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-86c748c4d6-2grmh_c6974567-3bea-447a-bb8b-ced22b6d34ce/horizon/3.log" Jan 31 04:55:03 crc kubenswrapper[4667]: I0131 04:55:03.686013 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-86c748c4d6-2grmh_c6974567-3bea-447a-bb8b-ced22b6d34ce/horizon/2.log" Jan 31 04:55:03 crc kubenswrapper[4667]: I0131 04:55:03.770385 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-pcntk_f33e0c1e-9f27-49a0-8132-0516b49d5ceb/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:55:04 crc kubenswrapper[4667]: I0131 04:55:04.004276 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-86c748c4d6-2grmh_c6974567-3bea-447a-bb8b-ced22b6d34ce/horizon-log/0.log" Jan 31 04:55:04 crc kubenswrapper[4667]: I0131 04:55:04.051999 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-2xhw7_c1426178-3085-452c-8da2-15a2bce73a55/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:55:04 crc kubenswrapper[4667]: I0131 04:55:04.245201 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5558665b54-mq2t5_2cf275de-3442-4fe5-ab8b-a4796c0bc829/keystone-api/0.log" Jan 31 04:55:04 crc kubenswrapper[4667]: I0131 04:55:04.319995 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ee717f47-2475-42f9-b4ce-25960d0fa24c/kube-state-metrics/0.log" Jan 31 04:55:04 crc kubenswrapper[4667]: I0131 04:55:04.535856 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-dtp9k_a8376acd-0ea2-4ac1-a843-59932a976b4e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:55:04 crc kubenswrapper[4667]: I0131 04:55:04.812190 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7f55cc74b5-gg8dl_48966487-81e5-4e5d-9a74-fbbf2b1091ae/neutron-api/0.log" Jan 31 04:55:04 crc kubenswrapper[4667]: I0131 04:55:04.814572 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7f55cc74b5-gg8dl_48966487-81e5-4e5d-9a74-fbbf2b1091ae/neutron-httpd/0.log" Jan 31 04:55:05 crc kubenswrapper[4667]: I0131 04:55:05.023031 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-t2hkb_92bb44a8-6936-4c3f-96f6-b9572d90574d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:55:05 crc kubenswrapper[4667]: I0131 04:55:05.505637 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_162d25d8-8fbe-4a52-808b-971f2017bfc0/nova-api-log/0.log" Jan 31 04:55:05 crc kubenswrapper[4667]: I0131 04:55:05.672122 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_841e82c7-29d0-414e-a01d-05718a83749b/nova-cell0-conductor-conductor/0.log" Jan 31 04:55:05 crc kubenswrapper[4667]: I0131 04:55:05.859325 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_162d25d8-8fbe-4a52-808b-971f2017bfc0/nova-api-api/0.log" Jan 31 04:55:05 crc kubenswrapper[4667]: I0131 04:55:05.921867 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_172b5953-ebb3-4eae-b8ee-33d59574f2ac/nova-cell1-conductor-conductor/0.log" Jan 31 04:55:06 crc 
kubenswrapper[4667]: I0131 04:55:06.071735 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_4aa65868-008b-4a37-ba24-d4d3872c00c7/nova-cell1-novncproxy-novncproxy/0.log" Jan 31 04:55:06 crc kubenswrapper[4667]: I0131 04:55:06.207736 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-7swhs_8d2d3410-e5e4-4607-ab3c-74199d66293d/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:55:06 crc kubenswrapper[4667]: I0131 04:55:06.506464 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b91fdcb3-e7f6-40d0-97d1-4db13213d61a/nova-metadata-log/0.log" Jan 31 04:55:06 crc kubenswrapper[4667]: I0131 04:55:06.743974 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_cf1db9a1-f45c-41f0-8d76-2c0318f0299b/nova-scheduler-scheduler/0.log" Jan 31 04:55:06 crc kubenswrapper[4667]: I0131 04:55:06.957106 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7/mysql-bootstrap/0.log" Jan 31 04:55:07 crc kubenswrapper[4667]: I0131 04:55:07.101874 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7/galera/0.log" Jan 31 04:55:07 crc kubenswrapper[4667]: I0131 04:55:07.129702 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ab3081f7-15c2-4f76-9e7c-2c76fb8a9cd7/mysql-bootstrap/0.log" Jan 31 04:55:07 crc kubenswrapper[4667]: I0131 04:55:07.393952 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fc6e0899-ca0f-4aac-8510-cf35066a3290/mysql-bootstrap/0.log" Jan 31 04:55:07 crc kubenswrapper[4667]: I0131 04:55:07.560154 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fc6e0899-ca0f-4aac-8510-cf35066a3290/mysql-bootstrap/0.log" Jan 31 04:55:07 crc kubenswrapper[4667]: I0131 04:55:07.587942 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fc6e0899-ca0f-4aac-8510-cf35066a3290/galera/0.log" Jan 31 04:55:07 crc kubenswrapper[4667]: I0131 04:55:07.803071 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b91fdcb3-e7f6-40d0-97d1-4db13213d61a/nova-metadata-metadata/0.log" Jan 31 04:55:07 crc kubenswrapper[4667]: I0131 04:55:07.814348 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c47c09d9-21e3-4c10-936f-0d679cf6a8f1/openstackclient/0.log" Jan 31 04:55:07 crc kubenswrapper[4667]: I0131 04:55:07.967215 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-cn9wc_39c3d98f-a6b1-4558-b565-c9f8c3afa543/ovn-controller/0.log" Jan 31 04:55:08 crc kubenswrapper[4667]: I0131 04:55:08.196018 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-hbhzb_73d60e7c-9a2f-4e04-8b13-31956316c5dc/openstack-network-exporter/0.log" Jan 31 04:55:08 crc kubenswrapper[4667]: I0131 04:55:08.317486 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-m545l_c3c43380-7b18-44fd-98f5-b9016923cdcb/ovsdb-server-init/0.log" Jan 31 04:55:08 crc kubenswrapper[4667]: I0131 04:55:08.520731 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-m545l_c3c43380-7b18-44fd-98f5-b9016923cdcb/ovs-vswitchd/0.log" Jan 
31 04:55:08 crc kubenswrapper[4667]: I0131 04:55:08.568775 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-m545l_c3c43380-7b18-44fd-98f5-b9016923cdcb/ovsdb-server/0.log" Jan 31 04:55:08 crc kubenswrapper[4667]: I0131 04:55:08.590614 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-m545l_c3c43380-7b18-44fd-98f5-b9016923cdcb/ovsdb-server-init/0.log" Jan 31 04:55:08 crc kubenswrapper[4667]: I0131 04:55:08.908916 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-rdvv9_68a411c8-a168-43be-997d-d8a1313da926/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:55:08 crc kubenswrapper[4667]: I0131 04:55:08.946577 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1e5983aa-121c-4344-884a-438181c3ac0d/openstack-network-exporter/0.log" Jan 31 04:55:09 crc kubenswrapper[4667]: I0131 04:55:09.002219 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1e5983aa-121c-4344-884a-438181c3ac0d/ovn-northd/0.log" Jan 31 04:55:09 crc kubenswrapper[4667]: I0131 04:55:09.198628 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5b881387-78fb-40db-8985-412849ad9068/ovsdbserver-nb/0.log" Jan 31 04:55:09 crc kubenswrapper[4667]: I0131 04:55:09.253277 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5b881387-78fb-40db-8985-412849ad9068/openstack-network-exporter/0.log" Jan 31 04:55:09 crc kubenswrapper[4667]: I0131 04:55:09.282018 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353" Jan 31 04:55:09 crc kubenswrapper[4667]: E0131 04:55:09.282243 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" Jan 31 04:55:10 crc kubenswrapper[4667]: I0131 04:55:10.096189 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7/openstack-network-exporter/0.log" Jan 31 04:55:10 crc kubenswrapper[4667]: I0131 04:55:10.179920 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_591bfb70-3f82-4aa3-8a1e-9c6f77fb94a7/ovsdbserver-sb/0.log" Jan 31 04:55:10 crc kubenswrapper[4667]: I0131 04:55:10.201555 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f87b7b68-pjkwf_95dba098-f46c-4948-ab9b-c05d9bf48660/placement-api/0.log" Jan 31 04:55:10 crc kubenswrapper[4667]: I0131 04:55:10.380759 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_acadb76e-2e9d-4af4-a5d1-fb5f28b006c6/setup-container/0.log" Jan 31 04:55:10 crc kubenswrapper[4667]: I0131 04:55:10.402077 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f87b7b68-pjkwf_95dba098-f46c-4948-ab9b-c05d9bf48660/placement-log/0.log" Jan 31 04:55:10 crc kubenswrapper[4667]: I0131 04:55:10.812621 4667 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_acadb76e-2e9d-4af4-a5d1-fb5f28b006c6/rabbitmq/0.log" Jan 31 04:55:10 crc kubenswrapper[4667]: I0131 04:55:10.826953 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_acadb76e-2e9d-4af4-a5d1-fb5f28b006c6/setup-container/0.log" Jan 31 04:55:10 crc kubenswrapper[4667]: I0131 04:55:10.893862 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_aca13392-5591-4b68-9948-c5e5fe558803/setup-container/0.log" Jan 31 04:55:11 crc kubenswrapper[4667]: I0131 04:55:11.312614 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z56m8"] Jan 31 04:55:11 crc kubenswrapper[4667]: E0131 04:55:11.314544 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2849dfb-294a-4fbf-b738-24934539079b" containerName="container-00" Jan 31 04:55:11 crc kubenswrapper[4667]: I0131 04:55:11.314581 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2849dfb-294a-4fbf-b738-24934539079b" containerName="container-00" Jan 31 04:55:11 crc kubenswrapper[4667]: I0131 04:55:11.314892 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2849dfb-294a-4fbf-b738-24934539079b" containerName="container-00" Jan 31 04:55:11 crc kubenswrapper[4667]: I0131 04:55:11.317196 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z56m8" Jan 31 04:55:11 crc kubenswrapper[4667]: I0131 04:55:11.333547 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z56m8"] Jan 31 04:55:11 crc kubenswrapper[4667]: I0131 04:55:11.412432 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628-catalog-content\") pod \"community-operators-z56m8\" (UID: \"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628\") " pod="openshift-marketplace/community-operators-z56m8" Jan 31 04:55:11 crc kubenswrapper[4667]: I0131 04:55:11.412468 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f59h6\" (UniqueName: \"kubernetes.io/projected/e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628-kube-api-access-f59h6\") pod \"community-operators-z56m8\" (UID: \"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628\") " pod="openshift-marketplace/community-operators-z56m8" Jan 31 04:55:11 crc kubenswrapper[4667]: I0131 04:55:11.412544 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628-utilities\") pod \"community-operators-z56m8\" (UID: \"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628\") " pod="openshift-marketplace/community-operators-z56m8" Jan 31 04:55:11 crc kubenswrapper[4667]: I0131 04:55:11.515963 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628-utilities\") pod \"community-operators-z56m8\" (UID: \"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628\") " pod="openshift-marketplace/community-operators-z56m8" Jan 31 04:55:11 crc kubenswrapper[4667]: I0131 04:55:11.516122 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628-catalog-content\") pod \"community-operators-z56m8\" (UID: \"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628\") " pod="openshift-marketplace/community-operators-z56m8" Jan 31 04:55:11 crc kubenswrapper[4667]: I0131 04:55:11.516143 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f59h6\" (UniqueName: \"kubernetes.io/projected/e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628-kube-api-access-f59h6\") pod \"community-operators-z56m8\" (UID: \"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628\") " pod="openshift-marketplace/community-operators-z56m8" Jan 31 04:55:11 crc kubenswrapper[4667]: I0131 04:55:11.516431 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628-utilities\") pod \"community-operators-z56m8\" (UID: \"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628\") " pod="openshift-marketplace/community-operators-z56m8" Jan 31 04:55:11 crc kubenswrapper[4667]: I0131 04:55:11.516616 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628-catalog-content\") pod \"community-operators-z56m8\" (UID: \"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628\") " pod="openshift-marketplace/community-operators-z56m8" Jan 31 04:55:11 crc kubenswrapper[4667]: I0131 04:55:11.539181 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f59h6\" (UniqueName: \"kubernetes.io/projected/e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628-kube-api-access-f59h6\") pod \"community-operators-z56m8\" (UID: \"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628\") " pod="openshift-marketplace/community-operators-z56m8" Jan 31 04:55:11 crc kubenswrapper[4667]: I0131 04:55:11.547084 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_aca13392-5591-4b68-9948-c5e5fe558803/setup-container/0.log" Jan 31 04:55:11 crc kubenswrapper[4667]: I0131 04:55:11.634565 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-xf5nb_65aa0404-25e7-4a24-8edf-ceae5320b02e/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:55:11 crc kubenswrapper[4667]: I0131 04:55:11.637044 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_aca13392-5591-4b68-9948-c5e5fe558803/rabbitmq/0.log" Jan 31 04:55:11 crc kubenswrapper[4667]: I0131 04:55:11.644823 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z56m8" Jan 31 04:55:12 crc kubenswrapper[4667]: I0131 04:55:12.048438 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-mkct2_c6e23bd4-49c8-4691-ab45-5426e6c3cc6f/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:55:12 crc kubenswrapper[4667]: I0131 04:55:12.092741 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-cfn5q_500e62ac-7319-4438-ab89-c072499f717c/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:55:12 crc kubenswrapper[4667]: I0131 04:55:12.190482 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z56m8"] Jan 31 04:55:12 crc kubenswrapper[4667]: I0131 04:55:12.584968 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-h2dbq_5a4621e8-915c-4f2f-b6fc-7dbccc69f5c8/ssh-known-hosts-edpm-deployment/0.log" Jan 31 04:55:12 crc kubenswrapper[4667]: I0131 04:55:12.588315 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8fd84_10997808-cd78-4267-b7a3-7ea36b948a60/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:55:12 crc kubenswrapper[4667]: I0131 04:55:12.880647 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8bff87d99-j8cd2_30fc5b26-45dd-42f8-9a58-7ba07c5aa56a/proxy-server/0.log" Jan 31 04:55:13 crc kubenswrapper[4667]: I0131 04:55:13.055569 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8bff87d99-j8cd2_30fc5b26-45dd-42f8-9a58-7ba07c5aa56a/proxy-httpd/0.log" Jan 31 04:55:13 crc kubenswrapper[4667]: I0131 04:55:13.057875 4667 generic.go:334] "Generic (PLEG): container finished" podID="e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628" containerID="794aba9f7976e5dbb5b5e215306a9bed264dd9565a807ac73af52b1d14dcdb62" exitCode=0 Jan 31 04:55:13 crc kubenswrapper[4667]: I0131 04:55:13.057911 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56m8" event={"ID":"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628","Type":"ContainerDied","Data":"794aba9f7976e5dbb5b5e215306a9bed264dd9565a807ac73af52b1d14dcdb62"} Jan 31 04:55:13 crc kubenswrapper[4667]: I0131 04:55:13.057936 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56m8" event={"ID":"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628","Type":"ContainerStarted","Data":"ec78f25b22d59fece7a862a695f4956d6e39a8818a15e0d795904fc705933d7f"} Jan 31 04:55:13 crc kubenswrapper[4667]: I0131 04:55:13.060450 4667 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 04:55:14 crc kubenswrapper[4667]: I0131 04:55:13.136559 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-fpj9r_65cc9566-177a-41b5-b00c-83290fa14641/swift-ring-rebalance/0.log" Jan 31 04:55:14 crc kubenswrapper[4667]: I0131 04:55:13.409390 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/account-auditor/0.log" Jan 31 04:55:14 crc kubenswrapper[4667]: I0131 04:55:13.435081 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/account-replicator/0.log" Jan 31 04:55:14 crc kubenswrapper[4667]: I0131 
04:55:13.451281 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/account-reaper/0.log" Jan 31 04:55:14 crc kubenswrapper[4667]: I0131 04:55:13.572990 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/account-server/0.log" Jan 31 04:55:14 crc kubenswrapper[4667]: I0131 04:55:13.655208 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/container-auditor/0.log" Jan 31 04:55:14 crc kubenswrapper[4667]: I0131 04:55:13.711541 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/container-replicator/0.log" Jan 31 04:55:14 crc kubenswrapper[4667]: I0131 04:55:13.742249 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/container-server/0.log" Jan 31 04:55:14 crc kubenswrapper[4667]: I0131 04:55:13.813183 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/container-updater/0.log" Jan 31 04:55:14 crc kubenswrapper[4667]: I0131 04:55:13.918098 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/object-auditor/0.log" Jan 31 04:55:14 crc kubenswrapper[4667]: I0131 04:55:13.970267 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/object-replicator/0.log" Jan 31 04:55:14 crc kubenswrapper[4667]: I0131 04:55:14.063975 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/object-expirer/0.log" Jan 31 04:55:14 crc kubenswrapper[4667]: I0131 04:55:14.089355 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/object-server/0.log" Jan 31 04:55:14 crc kubenswrapper[4667]: I0131 04:55:14.223278 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/object-updater/0.log" Jan 31 04:55:14 crc kubenswrapper[4667]: I0131 04:55:14.326271 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/rsync/0.log" Jan 31 04:55:14 crc kubenswrapper[4667]: I0131 04:55:14.420426 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_49dfb349-068e-4574-9e26-3d413295d983/swift-recon-cron/0.log" Jan 31 04:55:14 crc kubenswrapper[4667]: I0131 04:55:14.600005 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-fnj8c_c2249d9c-021c-4dbf-8770-767be19d9404/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 31 04:55:14 crc kubenswrapper[4667]: I0131 04:55:14.684487 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_6f4da9b8-1fb2-4d7c-b933-d5749919e9d1/tempest-tests-tempest-tests-runner/0.log" Jan 31 04:55:14 crc kubenswrapper[4667]: I0131 04:55:14.841393 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_dd17b156-9377-4bd0-ab7d-80b57f81c79c/test-operator-logs-container/0.log" Jan 31 04:55:14 crc kubenswrapper[4667]: I0131 
04:55:14.948692 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-972xp_39f585ed-5556-4f88-b5c0-3b6da9807764/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 31 04:55:15 crc kubenswrapper[4667]: I0131 04:55:15.861151 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-spp9g"]
Jan 31 04:55:15 crc kubenswrapper[4667]: I0131 04:55:15.863163 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spp9g"
Jan 31 04:55:15 crc kubenswrapper[4667]: I0131 04:55:15.875692 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-spp9g"]
Jan 31 04:55:16 crc kubenswrapper[4667]: I0131 04:55:16.005751 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f97lk\" (UniqueName: \"kubernetes.io/projected/6a240075-9cdd-4306-a510-c2d435d72723-kube-api-access-f97lk\") pod \"redhat-operators-spp9g\" (UID: \"6a240075-9cdd-4306-a510-c2d435d72723\") " pod="openshift-marketplace/redhat-operators-spp9g"
Jan 31 04:55:16 crc kubenswrapper[4667]: I0131 04:55:16.005804 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a240075-9cdd-4306-a510-c2d435d72723-utilities\") pod \"redhat-operators-spp9g\" (UID: \"6a240075-9cdd-4306-a510-c2d435d72723\") " pod="openshift-marketplace/redhat-operators-spp9g"
Jan 31 04:55:16 crc kubenswrapper[4667]: I0131 04:55:16.005856 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a240075-9cdd-4306-a510-c2d435d72723-catalog-content\") pod \"redhat-operators-spp9g\" (UID: \"6a240075-9cdd-4306-a510-c2d435d72723\") " pod="openshift-marketplace/redhat-operators-spp9g"
Jan 31 04:55:16 crc kubenswrapper[4667]: I0131 04:55:16.085666 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56m8" event={"ID":"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628","Type":"ContainerStarted","Data":"1e0f4ebeebe6983a0a1dfcdc95de4f03137c66e8577d8a0842eac8f07503c611"}
Jan 31 04:55:16 crc kubenswrapper[4667]: I0131 04:55:16.107975 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f97lk\" (UniqueName: \"kubernetes.io/projected/6a240075-9cdd-4306-a510-c2d435d72723-kube-api-access-f97lk\") pod \"redhat-operators-spp9g\" (UID: \"6a240075-9cdd-4306-a510-c2d435d72723\") " pod="openshift-marketplace/redhat-operators-spp9g"
Jan 31 04:55:16 crc kubenswrapper[4667]: I0131 04:55:16.108031 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a240075-9cdd-4306-a510-c2d435d72723-utilities\") pod \"redhat-operators-spp9g\" (UID: \"6a240075-9cdd-4306-a510-c2d435d72723\") " pod="openshift-marketplace/redhat-operators-spp9g"
Jan 31 04:55:16 crc kubenswrapper[4667]: I0131 04:55:16.108062 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a240075-9cdd-4306-a510-c2d435d72723-catalog-content\") pod \"redhat-operators-spp9g\" (UID: \"6a240075-9cdd-4306-a510-c2d435d72723\") " pod="openshift-marketplace/redhat-operators-spp9g"
Jan 31 04:55:16 crc kubenswrapper[4667]: I0131 04:55:16.108624 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a240075-9cdd-4306-a510-c2d435d72723-catalog-content\") pod \"redhat-operators-spp9g\" (UID: \"6a240075-9cdd-4306-a510-c2d435d72723\") " pod="openshift-marketplace/redhat-operators-spp9g"
Jan 31 04:55:16 crc kubenswrapper[4667]: I0131 04:55:16.109182 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a240075-9cdd-4306-a510-c2d435d72723-utilities\") pod \"redhat-operators-spp9g\" (UID: \"6a240075-9cdd-4306-a510-c2d435d72723\") " pod="openshift-marketplace/redhat-operators-spp9g"
Jan 31 04:55:16 crc kubenswrapper[4667]: I0131 04:55:16.127245 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f97lk\" (UniqueName: \"kubernetes.io/projected/6a240075-9cdd-4306-a510-c2d435d72723-kube-api-access-f97lk\") pod \"redhat-operators-spp9g\" (UID: \"6a240075-9cdd-4306-a510-c2d435d72723\") " pod="openshift-marketplace/redhat-operators-spp9g"
Jan 31 04:55:16 crc kubenswrapper[4667]: I0131 04:55:16.180095 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spp9g"
Jan 31 04:55:16 crc kubenswrapper[4667]: I0131 04:55:16.725293 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-spp9g"]
Jan 31 04:55:17 crc kubenswrapper[4667]: I0131 04:55:17.100600 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spp9g" event={"ID":"6a240075-9cdd-4306-a510-c2d435d72723","Type":"ContainerStarted","Data":"b86a73e2ada60d99eef8267ac10eaa786aba1cfa9a965c2d434bc9b23f3d8253"}
Jan 31 04:55:18 crc kubenswrapper[4667]: I0131 04:55:18.114684 4667 generic.go:334] "Generic (PLEG): container finished" podID="6a240075-9cdd-4306-a510-c2d435d72723" containerID="63ebd592ed99a851b5032cbe70e9a44f3f08f62ec8a7becc4886604f72a21931" exitCode=0
Jan 31 04:55:18 crc kubenswrapper[4667]: I0131 04:55:18.115416 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spp9g" event={"ID":"6a240075-9cdd-4306-a510-c2d435d72723","Type":"ContainerDied","Data":"63ebd592ed99a851b5032cbe70e9a44f3f08f62ec8a7becc4886604f72a21931"}
Jan 31 04:55:19 crc kubenswrapper[4667]: I0131 04:55:19.127870 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spp9g" event={"ID":"6a240075-9cdd-4306-a510-c2d435d72723","Type":"ContainerStarted","Data":"4ce48346716c4b6d733e394bff68f0296ec0ec6617ab39fd37857bc4e16fd354"}
Jan 31 04:55:21 crc kubenswrapper[4667]: I0131 04:55:21.154954 4667 generic.go:334] "Generic (PLEG): container finished" podID="e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628" containerID="1e0f4ebeebe6983a0a1dfcdc95de4f03137c66e8577d8a0842eac8f07503c611" exitCode=0
Jan 31 04:55:21 crc kubenswrapper[4667]: I0131 04:55:21.155316 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56m8" event={"ID":"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628","Type":"ContainerDied","Data":"1e0f4ebeebe6983a0a1dfcdc95de4f03137c66e8577d8a0842eac8f07503c611"}
Jan 31 04:55:23 crc kubenswrapper[4667]: I0131 04:55:23.174576 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56m8" event={"ID":"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628","Type":"ContainerStarted","Data":"67a2b6c28ffdf26df33f28bee35e8af3a5dcf6f7e5e63761db11ac4ffd445ffe"}
Jan 31 04:55:23 crc kubenswrapper[4667]: I0131 04:55:23.239821 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z56m8" podStartSLOduration=2.811160051 podStartE2EDuration="12.239800176s" podCreationTimestamp="2026-01-31 04:55:11 +0000 UTC" firstStartedPulling="2026-01-31 04:55:13.060248333 +0000 UTC m=+4036.576583632" lastFinishedPulling="2026-01-31 04:55:22.488888458 +0000 UTC m=+4046.005223757" observedRunningTime="2026-01-31 04:55:23.224916046 +0000 UTC m=+4046.741251345" watchObservedRunningTime="2026-01-31 04:55:23.239800176 +0000 UTC m=+4046.756135475"
Jan 31 04:55:23 crc kubenswrapper[4667]: I0131 04:55:23.285286 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353"
Jan 31 04:55:23 crc kubenswrapper[4667]: E0131 04:55:23.285571 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:55:26 crc kubenswrapper[4667]: I0131 04:55:26.076166 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_23e21efc-a978-4734-9fe2-f210ab9952f5/memcached/0.log"
Jan 31 04:55:27 crc kubenswrapper[4667]: I0131 04:55:27.214188 4667 generic.go:334] "Generic (PLEG): container finished" podID="6a240075-9cdd-4306-a510-c2d435d72723" containerID="4ce48346716c4b6d733e394bff68f0296ec0ec6617ab39fd37857bc4e16fd354" exitCode=0
Jan 31 04:55:27 crc kubenswrapper[4667]: I0131 04:55:27.214277 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spp9g" event={"ID":"6a240075-9cdd-4306-a510-c2d435d72723","Type":"ContainerDied","Data":"4ce48346716c4b6d733e394bff68f0296ec0ec6617ab39fd37857bc4e16fd354"}
Jan 31 04:55:28 crc kubenswrapper[4667]: I0131 04:55:28.222641 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spp9g" event={"ID":"6a240075-9cdd-4306-a510-c2d435d72723","Type":"ContainerStarted","Data":"2437e3226e77594624c3f0f41dff8400afc6a907c267d22225cae278f79ff53e"}
Jan 31 04:55:28 crc kubenswrapper[4667]: I0131 04:55:28.259221 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-spp9g" podStartSLOduration=3.569833173 podStartE2EDuration="13.25920337s" podCreationTimestamp="2026-01-31 04:55:15 +0000 UTC" firstStartedPulling="2026-01-31 04:55:18.118155635 +0000 UTC m=+4041.634490934" lastFinishedPulling="2026-01-31 04:55:27.807525822 +0000 UTC m=+4051.323861131" observedRunningTime="2026-01-31 04:55:28.25231552 +0000 UTC m=+4051.768650819" watchObservedRunningTime="2026-01-31 04:55:28.25920337 +0000 UTC m=+4051.775538669"
Jan 31 04:55:31 crc kubenswrapper[4667]: I0131 04:55:31.645386 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z56m8"
Jan 31 04:55:31 crc kubenswrapper[4667]: I0131 04:55:31.646967 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z56m8"
Jan 31 04:55:32 crc kubenswrapper[4667]: I0131 04:55:32.697238 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z56m8" podUID="e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628" containerName="registry-server" probeResult="failure" output=<
Jan 31 04:55:32 crc kubenswrapper[4667]: timeout: failed to connect service ":50051" within 1s
Jan 31 04:55:32 crc kubenswrapper[4667]: >
Jan 31 04:55:36 crc kubenswrapper[4667]: I0131 04:55:36.180991 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-spp9g"
Jan 31 04:55:36 crc kubenswrapper[4667]: I0131 04:55:36.182558 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-spp9g"
Jan 31 04:55:37 crc kubenswrapper[4667]: I0131 04:55:37.719218 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spp9g" podUID="6a240075-9cdd-4306-a510-c2d435d72723" containerName="registry-server" probeResult="failure" output=<
Jan 31 04:55:37 crc kubenswrapper[4667]: timeout: failed to connect service ":50051" within 1s
Jan 31 04:55:37 crc kubenswrapper[4667]: >
Jan 31 04:55:38 crc kubenswrapper[4667]: I0131 04:55:38.282532 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353"
Jan 31 04:55:38 crc kubenswrapper[4667]: E0131 04:55:38.283136 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:55:41 crc kubenswrapper[4667]: I0131 04:55:41.703104 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z56m8"
Jan 31 04:55:41 crc kubenswrapper[4667]: I0131 04:55:41.753319 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z56m8"
Jan 31 04:55:42 crc kubenswrapper[4667]: I0131 04:55:42.489951 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z56m8"]
Jan 31 04:55:43 crc kubenswrapper[4667]: I0131 04:55:43.375293 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z56m8" podUID="e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628" containerName="registry-server" containerID="cri-o://67a2b6c28ffdf26df33f28bee35e8af3a5dcf6f7e5e63761db11ac4ffd445ffe" gracePeriod=2
Jan 31 04:55:43 crc kubenswrapper[4667]: E0131 04:55:43.496851 4667 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3ba2ec4_e18e_4fc5_98b6_4cde7a7dc628.slice/crio-conmon-67a2b6c28ffdf26df33f28bee35e8af3a5dcf6f7e5e63761db11ac4ffd445ffe.scope\": RecentStats: unable to find data in memory cache]"
Jan 31 04:55:43 crc kubenswrapper[4667]: I0131 04:55:43.936087 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z56m8"
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.104619 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628-catalog-content\") pod \"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628\" (UID: \"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628\") "
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.104772 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f59h6\" (UniqueName: \"kubernetes.io/projected/e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628-kube-api-access-f59h6\") pod \"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628\" (UID: \"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628\") "
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.104925 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628-utilities\") pod \"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628\" (UID: \"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628\") "
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.106531 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628-utilities" (OuterVolumeSpecName: "utilities") pod "e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628" (UID: "e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.113377 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628-kube-api-access-f59h6" (OuterVolumeSpecName: "kube-api-access-f59h6") pod "e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628" (UID: "e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628"). InnerVolumeSpecName "kube-api-access-f59h6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.174487 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628" (UID: "e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.207680 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.207727 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.207745 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f59h6\" (UniqueName: \"kubernetes.io/projected/e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628-kube-api-access-f59h6\") on node \"crc\" DevicePath \"\""
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.385884 4667 generic.go:334] "Generic (PLEG): container finished" podID="e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628" containerID="67a2b6c28ffdf26df33f28bee35e8af3a5dcf6f7e5e63761db11ac4ffd445ffe" exitCode=0
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.385950 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z56m8"
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.385973 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56m8" event={"ID":"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628","Type":"ContainerDied","Data":"67a2b6c28ffdf26df33f28bee35e8af3a5dcf6f7e5e63761db11ac4ffd445ffe"}
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.387054 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z56m8" event={"ID":"e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628","Type":"ContainerDied","Data":"ec78f25b22d59fece7a862a695f4956d6e39a8818a15e0d795904fc705933d7f"}
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.387110 4667 scope.go:117] "RemoveContainer" containerID="67a2b6c28ffdf26df33f28bee35e8af3a5dcf6f7e5e63761db11ac4ffd445ffe"
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.415188 4667 scope.go:117] "RemoveContainer" containerID="1e0f4ebeebe6983a0a1dfcdc95de4f03137c66e8577d8a0842eac8f07503c611"
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.422267 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z56m8"]
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.431224 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z56m8"]
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.438675 4667 scope.go:117] "RemoveContainer" containerID="794aba9f7976e5dbb5b5e215306a9bed264dd9565a807ac73af52b1d14dcdb62"
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.503207 4667 scope.go:117] "RemoveContainer" containerID="67a2b6c28ffdf26df33f28bee35e8af3a5dcf6f7e5e63761db11ac4ffd445ffe"
Jan 31 04:55:44 crc kubenswrapper[4667]: E0131 04:55:44.504001 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67a2b6c28ffdf26df33f28bee35e8af3a5dcf6f7e5e63761db11ac4ffd445ffe\": container with ID starting with 67a2b6c28ffdf26df33f28bee35e8af3a5dcf6f7e5e63761db11ac4ffd445ffe not found: ID does not exist" containerID="67a2b6c28ffdf26df33f28bee35e8af3a5dcf6f7e5e63761db11ac4ffd445ffe"
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.504038 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67a2b6c28ffdf26df33f28bee35e8af3a5dcf6f7e5e63761db11ac4ffd445ffe"} err="failed to get container status \"67a2b6c28ffdf26df33f28bee35e8af3a5dcf6f7e5e63761db11ac4ffd445ffe\": rpc error: code = NotFound desc = could not find container \"67a2b6c28ffdf26df33f28bee35e8af3a5dcf6f7e5e63761db11ac4ffd445ffe\": container with ID starting with 67a2b6c28ffdf26df33f28bee35e8af3a5dcf6f7e5e63761db11ac4ffd445ffe not found: ID does not exist"
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.504064 4667 scope.go:117] "RemoveContainer" containerID="1e0f4ebeebe6983a0a1dfcdc95de4f03137c66e8577d8a0842eac8f07503c611"
Jan 31 04:55:44 crc kubenswrapper[4667]: E0131 04:55:44.504607 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e0f4ebeebe6983a0a1dfcdc95de4f03137c66e8577d8a0842eac8f07503c611\": container with ID starting with 1e0f4ebeebe6983a0a1dfcdc95de4f03137c66e8577d8a0842eac8f07503c611 not found: ID does not exist" containerID="1e0f4ebeebe6983a0a1dfcdc95de4f03137c66e8577d8a0842eac8f07503c611"
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.504632 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e0f4ebeebe6983a0a1dfcdc95de4f03137c66e8577d8a0842eac8f07503c611"} err="failed to get container status \"1e0f4ebeebe6983a0a1dfcdc95de4f03137c66e8577d8a0842eac8f07503c611\": rpc error: code = NotFound desc = could not find container \"1e0f4ebeebe6983a0a1dfcdc95de4f03137c66e8577d8a0842eac8f07503c611\": container with ID starting with 1e0f4ebeebe6983a0a1dfcdc95de4f03137c66e8577d8a0842eac8f07503c611 not found: ID does not exist"
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.504650 4667 scope.go:117] "RemoveContainer" containerID="794aba9f7976e5dbb5b5e215306a9bed264dd9565a807ac73af52b1d14dcdb62"
Jan 31 04:55:44 crc kubenswrapper[4667]: E0131 04:55:44.504991 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"794aba9f7976e5dbb5b5e215306a9bed264dd9565a807ac73af52b1d14dcdb62\": container with ID starting with 794aba9f7976e5dbb5b5e215306a9bed264dd9565a807ac73af52b1d14dcdb62 not found: ID does not exist" containerID="794aba9f7976e5dbb5b5e215306a9bed264dd9565a807ac73af52b1d14dcdb62"
Jan 31 04:55:44 crc kubenswrapper[4667]: I0131 04:55:44.505020 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"794aba9f7976e5dbb5b5e215306a9bed264dd9565a807ac73af52b1d14dcdb62"} err="failed to get container status \"794aba9f7976e5dbb5b5e215306a9bed264dd9565a807ac73af52b1d14dcdb62\": rpc error: code = NotFound desc = could not find container \"794aba9f7976e5dbb5b5e215306a9bed264dd9565a807ac73af52b1d14dcdb62\": container with ID starting with 794aba9f7976e5dbb5b5e215306a9bed264dd9565a807ac73af52b1d14dcdb62 not found: ID does not exist"
Jan 31 04:55:45 crc kubenswrapper[4667]: I0131 04:55:45.292618 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628" path="/var/lib/kubelet/pods/e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628/volumes"
Jan 31 04:55:47 crc kubenswrapper[4667]: I0131 04:55:47.227761 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spp9g" podUID="6a240075-9cdd-4306-a510-c2d435d72723" containerName="registry-server" probeResult="failure" output=<
Jan 31 04:55:47 crc kubenswrapper[4667]: timeout: failed to connect service ":50051" within 1s
Jan 31 04:55:47 crc kubenswrapper[4667]: >
Jan 31 04:55:50 crc kubenswrapper[4667]: I0131 04:55:50.856177 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr_770c15b8-5980-4cf9-91c7-11b2ded11b60/util/0.log"
Jan 31 04:55:51 crc kubenswrapper[4667]: I0131 04:55:51.172882 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr_770c15b8-5980-4cf9-91c7-11b2ded11b60/util/0.log"
Jan 31 04:55:51 crc kubenswrapper[4667]: I0131 04:55:51.250072 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr_770c15b8-5980-4cf9-91c7-11b2ded11b60/pull/0.log"
Jan 31 04:55:51 crc kubenswrapper[4667]: I0131 04:55:51.282109 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353"
Jan 31 04:55:51 crc kubenswrapper[4667]: E0131 04:55:51.282439 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:55:51 crc kubenswrapper[4667]: I0131 04:55:51.295309 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr_770c15b8-5980-4cf9-91c7-11b2ded11b60/pull/0.log"
Jan 31 04:55:51 crc kubenswrapper[4667]: I0131 04:55:51.593554 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr_770c15b8-5980-4cf9-91c7-11b2ded11b60/pull/0.log"
Jan 31 04:55:51 crc kubenswrapper[4667]: I0131 04:55:51.655117 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr_770c15b8-5980-4cf9-91c7-11b2ded11b60/util/0.log"
Jan 31 04:55:51 crc kubenswrapper[4667]: I0131 04:55:51.730099 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_caeab9432d2d716ddc2226985785c2befc1b94ceca4ba368762fb3e0362hwqr_770c15b8-5980-4cf9-91c7-11b2ded11b60/extract/0.log"
Jan 31 04:55:52 crc kubenswrapper[4667]: I0131 04:55:52.025067 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-pqxkg_508d212d-ccda-471c-94aa-96955a519e5a/manager/0.log"
Jan 31 04:55:52 crc kubenswrapper[4667]: I0131 04:55:52.149831 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-2xtdf_5743730b-079b-4b07-a87b-932cd637e387/manager/0.log"
Jan 31 04:55:52 crc kubenswrapper[4667]: I0131 04:55:52.473098 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-j629c_5280851f-6404-45ad-adc7-f41479cb7dc3/manager/0.log"
Jan 31 04:55:52 crc kubenswrapper[4667]: I0131 04:55:52.726999 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-lzb8l_cfe9238d-7457-43f4-9933-cece048fc3fe/manager/0.log"
Jan 31 04:55:53 crc kubenswrapper[4667]: I0131 04:55:53.236943 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-rfpnc_f26454ff-c920-4240-84dd-684272f0c0c8/manager/0.log"
Jan 31 04:55:53 crc kubenswrapper[4667]: I0131 04:55:53.735807 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-6cf9z_5108f978-fa68-4add-9f97-5e02aec8c688/manager/0.log"
Jan 31 04:55:53 crc kubenswrapper[4667]: I0131 04:55:53.976287 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-zswlt_47cf710a-e856-4094-8ef8-ff115631a236/manager/0.log"
Jan 31 04:55:54 crc kubenswrapper[4667]: I0131 04:55:54.134598 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-hxstt_8a4eab04-25a1-4da9-8ee1-0243d4b69073/manager/0.log"
Jan 31 04:55:54 crc kubenswrapper[4667]: I0131 04:55:54.335941 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-w6vd6_ed4fdc84-4fc5-4e5a-8959-b5ea977c9b56/manager/0.log"
Jan 31 04:55:54 crc kubenswrapper[4667]: I0131 04:55:54.550257 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-wmmkk_f955fd59-24f1-42bb-81a8-c17e32274291/manager/0.log"
Jan 31 04:55:54 crc kubenswrapper[4667]: I0131 04:55:54.936498 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-kf59p_1af3e556-130c-4530-89de-dd64852193c8/manager/0.log"
Jan 31 04:55:55 crc kubenswrapper[4667]: I0131 04:55:55.340000 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-zq7nc_4dd3097d-038b-459b-be09-25e6a9c28379/manager/0.log"
Jan 31 04:55:55 crc kubenswrapper[4667]: I0131 04:55:55.472084 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-2zj6j_645ed22c-c54e-495c-af4d-a63635f01dbc/manager/0.log"
Jan 31 04:55:55 crc kubenswrapper[4667]: I0131 04:55:55.799154 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4d697dw_ad7389f5-4d9e-4a91-89b8-8f65e425fe83/manager/0.log"
Jan 31 04:55:56 crc kubenswrapper[4667]: I0131 04:55:56.060046 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-77f687fc99-5bq4z_2b9c9fa2-4838-4c78-bcab-9bc723279049/operator/0.log"
Jan 31 04:55:56 crc kubenswrapper[4667]: I0131 04:55:56.191051 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-zhdt9_28215a5c-c908-41e3-b138-1b26eaab9121/registry-server/0.log"
Jan 31 04:55:56 crc kubenswrapper[4667]: I0131 04:55:56.579628 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-vpt7r_4955e603-5ae1-4c59-8f06-7e4c3f1cae70/manager/0.log"
Jan 31 04:55:56 crc kubenswrapper[4667]: I0131 04:55:56.652236 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-lgk8x_aa7cd74d-218f-47a1-80f6-db8e475b1ba0/manager/0.log"
Jan 31 04:55:57 crc kubenswrapper[4667]: I0131 04:55:57.165683 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-964b5_1f3ad0ee-dce4-4ed0-90f7-e2c195b6d099/operator/0.log"
Jan 31 04:55:57 crc kubenswrapper[4667]: I0131 04:55:57.229041 4667 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spp9g" podUID="6a240075-9cdd-4306-a510-c2d435d72723" containerName="registry-server" probeResult="failure" output=<
Jan 31 04:55:57 crc kubenswrapper[4667]: timeout: failed to connect service ":50051" within 1s
Jan 31 04:55:57 crc kubenswrapper[4667]: >
Jan 31 04:55:57 crc kubenswrapper[4667]: I0131 04:55:57.444339 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-x46bg_fc224c93-299f-4f99-b16d-64ab47cb66a8/manager/0.log"
Jan 31 04:55:57 crc kubenswrapper[4667]: I0131 04:55:57.977679 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-b4btm_7c71998a-5e4c-461c-96f9-3ff67b4619cd/manager/0.log"
Jan 31 04:55:58 crc kubenswrapper[4667]: I0131 04:55:58.173217 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-fcd7f5fc5-pfnrd_5af1cf00-3340-481a-9312-cdd15cddbf5d/manager/0.log"
Jan 31 04:55:58 crc kubenswrapper[4667]: I0131 04:55:58.223397 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-gzj6r_340a909d-7419-4721-be11-2c37a3a87022/manager/0.log"
Jan 31 04:55:58 crc kubenswrapper[4667]: I0131 04:55:58.822923 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-fxzcm_675da051-c9cc-4817-9092-478b3d90d1bf/manager/0.log"
Jan 31 04:55:59 crc kubenswrapper[4667]: I0131 04:55:59.420116 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-54fc54694b-t88kx_75fa830b-0948-4104-874f-332cb2ea9de2/manager/0.log"
Jan 31 04:56:06 crc kubenswrapper[4667]: I0131 04:56:06.282352 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353"
Jan 31 04:56:06 crc kubenswrapper[4667]: E0131 04:56:06.283309 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:56:06 crc kubenswrapper[4667]: I0131 04:56:06.592764 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-spp9g"
Jan 31 04:56:06 crc kubenswrapper[4667]: I0131 04:56:06.651894 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-spp9g"
Jan 31 04:56:06 crc kubenswrapper[4667]: I0131 04:56:06.830178 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-spp9g"]
Jan 31 04:56:08 crc kubenswrapper[4667]: I0131 04:56:08.584740 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-spp9g" podUID="6a240075-9cdd-4306-a510-c2d435d72723" containerName="registry-server" containerID="cri-o://2437e3226e77594624c3f0f41dff8400afc6a907c267d22225cae278f79ff53e" gracePeriod=2
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.032537 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spp9g"
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.125316 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a240075-9cdd-4306-a510-c2d435d72723-utilities\") pod \"6a240075-9cdd-4306-a510-c2d435d72723\" (UID: \"6a240075-9cdd-4306-a510-c2d435d72723\") "
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.125849 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a240075-9cdd-4306-a510-c2d435d72723-utilities" (OuterVolumeSpecName: "utilities") pod "6a240075-9cdd-4306-a510-c2d435d72723" (UID: "6a240075-9cdd-4306-a510-c2d435d72723"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.125955 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a240075-9cdd-4306-a510-c2d435d72723-catalog-content\") pod \"6a240075-9cdd-4306-a510-c2d435d72723\" (UID: \"6a240075-9cdd-4306-a510-c2d435d72723\") "
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.130951 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f97lk\" (UniqueName: \"kubernetes.io/projected/6a240075-9cdd-4306-a510-c2d435d72723-kube-api-access-f97lk\") pod \"6a240075-9cdd-4306-a510-c2d435d72723\" (UID: \"6a240075-9cdd-4306-a510-c2d435d72723\") "
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.131263 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a240075-9cdd-4306-a510-c2d435d72723-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.154053 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a240075-9cdd-4306-a510-c2d435d72723-kube-api-access-f97lk" (OuterVolumeSpecName: "kube-api-access-f97lk") pod "6a240075-9cdd-4306-a510-c2d435d72723" (UID: "6a240075-9cdd-4306-a510-c2d435d72723"). InnerVolumeSpecName "kube-api-access-f97lk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.232444 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f97lk\" (UniqueName: \"kubernetes.io/projected/6a240075-9cdd-4306-a510-c2d435d72723-kube-api-access-f97lk\") on node \"crc\" DevicePath \"\""
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.249485 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a240075-9cdd-4306-a510-c2d435d72723-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a240075-9cdd-4306-a510-c2d435d72723" (UID: "6a240075-9cdd-4306-a510-c2d435d72723"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.345645 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a240075-9cdd-4306-a510-c2d435d72723-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.594043 4667 generic.go:334] "Generic (PLEG): container finished" podID="6a240075-9cdd-4306-a510-c2d435d72723" containerID="2437e3226e77594624c3f0f41dff8400afc6a907c267d22225cae278f79ff53e" exitCode=0
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.594124 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spp9g"
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.594132 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spp9g" event={"ID":"6a240075-9cdd-4306-a510-c2d435d72723","Type":"ContainerDied","Data":"2437e3226e77594624c3f0f41dff8400afc6a907c267d22225cae278f79ff53e"}
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.594457 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spp9g" event={"ID":"6a240075-9cdd-4306-a510-c2d435d72723","Type":"ContainerDied","Data":"b86a73e2ada60d99eef8267ac10eaa786aba1cfa9a965c2d434bc9b23f3d8253"}
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.594477 4667 scope.go:117] "RemoveContainer" containerID="2437e3226e77594624c3f0f41dff8400afc6a907c267d22225cae278f79ff53e"
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.614176 4667 scope.go:117] "RemoveContainer" containerID="4ce48346716c4b6d733e394bff68f0296ec0ec6617ab39fd37857bc4e16fd354"
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.622343 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-spp9g"]
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.632109 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-spp9g"]
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.634695 4667 scope.go:117] "RemoveContainer" containerID="63ebd592ed99a851b5032cbe70e9a44f3f08f62ec8a7becc4886604f72a21931"
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.685216 4667 scope.go:117] "RemoveContainer" containerID="2437e3226e77594624c3f0f41dff8400afc6a907c267d22225cae278f79ff53e"
Jan 31 04:56:09 crc kubenswrapper[4667]: E0131 04:56:09.685653 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2437e3226e77594624c3f0f41dff8400afc6a907c267d22225cae278f79ff53e\": container with ID starting with 2437e3226e77594624c3f0f41dff8400afc6a907c267d22225cae278f79ff53e not found: ID does not exist" containerID="2437e3226e77594624c3f0f41dff8400afc6a907c267d22225cae278f79ff53e"
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.685696 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2437e3226e77594624c3f0f41dff8400afc6a907c267d22225cae278f79ff53e"} err="failed to get container status \"2437e3226e77594624c3f0f41dff8400afc6a907c267d22225cae278f79ff53e\": rpc error: code = NotFound desc = could not find container \"2437e3226e77594624c3f0f41dff8400afc6a907c267d22225cae278f79ff53e\": container with ID starting with 2437e3226e77594624c3f0f41dff8400afc6a907c267d22225cae278f79ff53e not found: ID does not exist"
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.685720 4667 scope.go:117] "RemoveContainer" containerID="4ce48346716c4b6d733e394bff68f0296ec0ec6617ab39fd37857bc4e16fd354"
Jan 31 04:56:09 crc kubenswrapper[4667]: E0131 04:56:09.685955 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ce48346716c4b6d733e394bff68f0296ec0ec6617ab39fd37857bc4e16fd354\": container with ID starting with 4ce48346716c4b6d733e394bff68f0296ec0ec6617ab39fd37857bc4e16fd354 not found: ID does not exist" containerID="4ce48346716c4b6d733e394bff68f0296ec0ec6617ab39fd37857bc4e16fd354"
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.686000 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ce48346716c4b6d733e394bff68f0296ec0ec6617ab39fd37857bc4e16fd354"} err="failed to get container status \"4ce48346716c4b6d733e394bff68f0296ec0ec6617ab39fd37857bc4e16fd354\": rpc error: code = NotFound desc = could not find container \"4ce48346716c4b6d733e394bff68f0296ec0ec6617ab39fd37857bc4e16fd354\": container with ID starting with 4ce48346716c4b6d733e394bff68f0296ec0ec6617ab39fd37857bc4e16fd354 not found: ID does not exist"
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.686017 4667 scope.go:117] "RemoveContainer" containerID="63ebd592ed99a851b5032cbe70e9a44f3f08f62ec8a7becc4886604f72a21931"
Jan 31 04:56:09 crc kubenswrapper[4667]: E0131 04:56:09.686284 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63ebd592ed99a851b5032cbe70e9a44f3f08f62ec8a7becc4886604f72a21931\": container with ID starting with 63ebd592ed99a851b5032cbe70e9a44f3f08f62ec8a7becc4886604f72a21931 not found: ID does not exist" containerID="63ebd592ed99a851b5032cbe70e9a44f3f08f62ec8a7becc4886604f72a21931"
Jan 31 04:56:09 crc kubenswrapper[4667]: I0131 04:56:09.686338 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63ebd592ed99a851b5032cbe70e9a44f3f08f62ec8a7becc4886604f72a21931"} err="failed to get container status \"63ebd592ed99a851b5032cbe70e9a44f3f08f62ec8a7becc4886604f72a21931\": rpc error: code = NotFound desc = could not find container \"63ebd592ed99a851b5032cbe70e9a44f3f08f62ec8a7becc4886604f72a21931\": container with ID starting with 63ebd592ed99a851b5032cbe70e9a44f3f08f62ec8a7becc4886604f72a21931 not found: ID does not exist"
Jan 31 04:56:11 crc kubenswrapper[4667]: I0131 04:56:11.290785 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a240075-9cdd-4306-a510-c2d435d72723" path="/var/lib/kubelet/pods/6a240075-9cdd-4306-a510-c2d435d72723/volumes"
Jan 31 04:56:18 crc kubenswrapper[4667]: I0131 04:56:18.281570 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353"
Jan 31 04:56:18 crc kubenswrapper[4667]: E0131 04:56:18.282445 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:56:24 crc kubenswrapper[4667]: I0131 04:56:24.277471 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-stnvq_dbbace8c-06bb-4b50-a132-a681482dc9e5/control-plane-machine-set-operator/0.log"
Jan 31 04:56:24 crc kubenswrapper[4667]: I0131 04:56:24.431977 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zpjcj_83d090b3-311a-4b89-aa7d-de1ca0b237d6/kube-rbac-proxy/0.log"
Jan 31 04:56:24 crc kubenswrapper[4667]: I0131 04:56:24.527753 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zpjcj_83d090b3-311a-4b89-aa7d-de1ca0b237d6/machine-api-operator/0.log"
Jan 31 04:56:31 crc kubenswrapper[4667]: I0131 04:56:31.282423 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353"
Jan 31 04:56:31 crc kubenswrapper[4667]: E0131 04:56:31.283390 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:56:39 crc kubenswrapper[4667]: I0131 04:56:39.763482 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-fhjxn_79f310bb-9fe3-4e37-9c80-b5c218823271/cert-manager-controller/0.log"
Jan 31 04:56:40 crc kubenswrapper[4667]: I0131 04:56:40.061263 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-z8c84_06350efe-2c60-4ce9-a58d-034636cc57db/cert-manager-cainjector/0.log"
Jan 31 04:56:40 crc kubenswrapper[4667]: I0131 04:56:40.120939 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-2vkbp_3a2e920c-f3b6-4c7d-aeae-f8d88ce0a3b1/cert-manager-webhook/0.log"
Jan 31 04:56:42 crc kubenswrapper[4667]: I0131 04:56:42.285480 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353"
Jan 31 04:56:42 crc kubenswrapper[4667]: E0131 04:56:42.286114 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.452419 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-22gm5"]
Jan 31 04:56:43 crc kubenswrapper[4667]: E0131 04:56:43.453607 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628" containerName="extract-content"
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.453699 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628" containerName="extract-content"
Jan 31 04:56:43 crc kubenswrapper[4667]: E0131 04:56:43.453779 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a240075-9cdd-4306-a510-c2d435d72723" containerName="registry-server"
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.453862 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a240075-9cdd-4306-a510-c2d435d72723" containerName="registry-server"
Jan 31 04:56:43 crc kubenswrapper[4667]: E0131 04:56:43.453956 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a240075-9cdd-4306-a510-c2d435d72723" containerName="extract-utilities"
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.454049 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a240075-9cdd-4306-a510-c2d435d72723" containerName="extract-utilities"
Jan 31 04:56:43 crc kubenswrapper[4667]: E0131 04:56:43.454128 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628" containerName="registry-server"
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.454193 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628" containerName="registry-server"
Jan 31 04:56:43 crc kubenswrapper[4667]: E0131 04:56:43.454285 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a240075-9cdd-4306-a510-c2d435d72723" containerName="extract-content"
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.454352 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a240075-9cdd-4306-a510-c2d435d72723" containerName="extract-content"
Jan 31 04:56:43 crc kubenswrapper[4667]: E0131 04:56:43.454425 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628" containerName="extract-utilities"
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.454511 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628" containerName="extract-utilities"
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.454793 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3ba2ec4-e18e-4fc5-98b6-4cde7a7dc628" containerName="registry-server"
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.454953 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a240075-9cdd-4306-a510-c2d435d72723" containerName="registry-server"
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.457276 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-22gm5"
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.473387 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-22gm5"]
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.519205 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l58sr\" (UniqueName: \"kubernetes.io/projected/33fee552-706c-48a5-bc09-93a38cfc8eee-kube-api-access-l58sr\") pod \"certified-operators-22gm5\" (UID: \"33fee552-706c-48a5-bc09-93a38cfc8eee\") " pod="openshift-marketplace/certified-operators-22gm5"
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.519276 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33fee552-706c-48a5-bc09-93a38cfc8eee-catalog-content\") pod \"certified-operators-22gm5\" (UID: \"33fee552-706c-48a5-bc09-93a38cfc8eee\") " pod="openshift-marketplace/certified-operators-22gm5"
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.519296 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33fee552-706c-48a5-bc09-93a38cfc8eee-utilities\") pod \"certified-operators-22gm5\" (UID: \"33fee552-706c-48a5-bc09-93a38cfc8eee\") " pod="openshift-marketplace/certified-operators-22gm5"
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.621080 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l58sr\" (UniqueName: \"kubernetes.io/projected/33fee552-706c-48a5-bc09-93a38cfc8eee-kube-api-access-l58sr\") pod \"certified-operators-22gm5\" (UID: \"33fee552-706c-48a5-bc09-93a38cfc8eee\") " pod="openshift-marketplace/certified-operators-22gm5"
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.621315 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33fee552-706c-48a5-bc09-93a38cfc8eee-catalog-content\") pod \"certified-operators-22gm5\" (UID: \"33fee552-706c-48a5-bc09-93a38cfc8eee\") " pod="openshift-marketplace/certified-operators-22gm5"
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.621706 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33fee552-706c-48a5-bc09-93a38cfc8eee-catalog-content\") pod \"certified-operators-22gm5\" (UID: \"33fee552-706c-48a5-bc09-93a38cfc8eee\") " pod="openshift-marketplace/certified-operators-22gm5"
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.621757 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33fee552-706c-48a5-bc09-93a38cfc8eee-utilities\") pod \"certified-operators-22gm5\" (UID: \"33fee552-706c-48a5-bc09-93a38cfc8eee\") " pod="openshift-marketplace/certified-operators-22gm5"
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.622076 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33fee552-706c-48a5-bc09-93a38cfc8eee-utilities\") pod \"certified-operators-22gm5\" (UID: \"33fee552-706c-48a5-bc09-93a38cfc8eee\") " pod="openshift-marketplace/certified-operators-22gm5"
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.641046 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l58sr\" (UniqueName: \"kubernetes.io/projected/33fee552-706c-48a5-bc09-93a38cfc8eee-kube-api-access-l58sr\") pod \"certified-operators-22gm5\" (UID: \"33fee552-706c-48a5-bc09-93a38cfc8eee\") " pod="openshift-marketplace/certified-operators-22gm5"
Jan 31 04:56:43 crc kubenswrapper[4667]: I0131 04:56:43.777519 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-22gm5"
Jan 31 04:56:44 crc kubenswrapper[4667]: I0131 04:56:44.273435 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-22gm5"]
Jan 31 04:56:44 crc kubenswrapper[4667]: I0131 04:56:44.915440 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22gm5" event={"ID":"33fee552-706c-48a5-bc09-93a38cfc8eee","Type":"ContainerStarted","Data":"239568186f0577a40fdf29f75e896ce0de33a1f9a3c8d9fc4675b426e51a8008"}
Jan 31 04:56:45 crc kubenswrapper[4667]: E0131 04:56:45.204626 4667 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33fee552_706c_48a5_bc09_93a38cfc8eee.slice/crio-3c0e50faac915dfb64a7a70eb931c1d9069abf2ddb63e174ebc1eaf9edb6f4d4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33fee552_706c_48a5_bc09_93a38cfc8eee.slice/crio-conmon-3c0e50faac915dfb64a7a70eb931c1d9069abf2ddb63e174ebc1eaf9edb6f4d4.scope\": RecentStats: unable to find data in memory cache]"
Jan 31 04:56:45 crc kubenswrapper[4667]: I0131 04:56:45.926353 4667 generic.go:334] "Generic (PLEG): container finished" podID="33fee552-706c-48a5-bc09-93a38cfc8eee" containerID="3c0e50faac915dfb64a7a70eb931c1d9069abf2ddb63e174ebc1eaf9edb6f4d4" exitCode=0
Jan 31 04:56:45 crc kubenswrapper[4667]: I0131 04:56:45.926692 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22gm5" event={"ID":"33fee552-706c-48a5-bc09-93a38cfc8eee","Type":"ContainerDied","Data":"3c0e50faac915dfb64a7a70eb931c1d9069abf2ddb63e174ebc1eaf9edb6f4d4"}
Jan 31 04:56:47 crc kubenswrapper[4667]: I0131 04:56:47.945745 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22gm5" event={"ID":"33fee552-706c-48a5-bc09-93a38cfc8eee","Type":"ContainerStarted","Data":"f4e17f3632c9200deb3489a2664e4e318e10fdba63371cca05a5cc3e5d64c241"}
Jan 31 04:56:48 crc kubenswrapper[4667]: I0131 04:56:48.956445 4667 generic.go:334] "Generic (PLEG): container finished" podID="33fee552-706c-48a5-bc09-93a38cfc8eee" containerID="f4e17f3632c9200deb3489a2664e4e318e10fdba63371cca05a5cc3e5d64c241" exitCode=0
Jan 31 04:56:48 crc kubenswrapper[4667]: I0131 04:56:48.956497 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22gm5" event={"ID":"33fee552-706c-48a5-bc09-93a38cfc8eee","Type":"ContainerDied","Data":"f4e17f3632c9200deb3489a2664e4e318e10fdba63371cca05a5cc3e5d64c241"}
Jan 31 04:56:49 crc kubenswrapper[4667]: I0131 04:56:49.981201 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22gm5" event={"ID":"33fee552-706c-48a5-bc09-93a38cfc8eee","Type":"ContainerStarted","Data":"22a3d6af6d1ac1b2ab6601457043358b4ff1e815fd9155f4a1f8835070a1cf63"}
Jan 31 04:56:50 crc kubenswrapper[4667]: I0131 04:56:50.010344 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-22gm5" podStartSLOduration=3.580734178 podStartE2EDuration="7.010325164s" podCreationTimestamp="2026-01-31 04:56:43 +0000 UTC" firstStartedPulling="2026-01-31 04:56:45.930059243 +0000 UTC m=+4129.446394532" lastFinishedPulling="2026-01-31 04:56:49.359650219 +0000 UTC m=+4132.875985518" observedRunningTime="2026-01-31 04:56:49.998489785 +0000 UTC m=+4133.514825074" watchObservedRunningTime="2026-01-31 04:56:50.010325164 +0000 UTC m=+4133.526660453"
Jan 31 04:56:53 crc kubenswrapper[4667]: I0131 04:56:53.779227 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-22gm5"
Jan 31 04:56:53 crc kubenswrapper[4667]: I0131 04:56:53.779793 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-22gm5"
Jan 31 04:56:53 crc kubenswrapper[4667]: I0131 04:56:53.840580 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-22gm5"
Jan 31 04:56:54 crc kubenswrapper[4667]: I0131 04:56:54.055114 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-22gm5"
Jan 31 04:56:54 crc kubenswrapper[4667]: I0131 04:56:54.114794 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-22gm5"]
Jan 31 04:56:55 crc kubenswrapper[4667]: I0131 04:56:55.283860 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353"
Jan 31 04:56:55 crc kubenswrapper[4667]: E0131 04:56:55.284127 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:56:56 crc kubenswrapper[4667]: I0131 04:56:56.036194 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-22gm5" podUID="33fee552-706c-48a5-bc09-93a38cfc8eee" containerName="registry-server" containerID="cri-o://22a3d6af6d1ac1b2ab6601457043358b4ff1e815fd9155f4a1f8835070a1cf63" gracePeriod=2
Jan 31 04:56:56 crc kubenswrapper[4667]: I0131 04:56:56.940363 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-22gm5"
Jan 31 04:56:56 crc kubenswrapper[4667]: I0131 04:56:56.990066 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33fee552-706c-48a5-bc09-93a38cfc8eee-utilities\") pod \"33fee552-706c-48a5-bc09-93a38cfc8eee\" (UID: \"33fee552-706c-48a5-bc09-93a38cfc8eee\") "
Jan 31 04:56:56 crc kubenswrapper[4667]: I0131 04:56:56.990270 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l58sr\" (UniqueName: \"kubernetes.io/projected/33fee552-706c-48a5-bc09-93a38cfc8eee-kube-api-access-l58sr\") pod \"33fee552-706c-48a5-bc09-93a38cfc8eee\" (UID: \"33fee552-706c-48a5-bc09-93a38cfc8eee\") "
Jan 31 04:56:56 crc kubenswrapper[4667]: I0131 04:56:56.990347 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33fee552-706c-48a5-bc09-93a38cfc8eee-catalog-content\") pod \"33fee552-706c-48a5-bc09-93a38cfc8eee\" (UID: \"33fee552-706c-48a5-bc09-93a38cfc8eee\") "
Jan 31 04:56:56 crc kubenswrapper[4667]: I0131 04:56:56.994734 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33fee552-706c-48a5-bc09-93a38cfc8eee-utilities" (OuterVolumeSpecName: "utilities") pod "33fee552-706c-48a5-bc09-93a38cfc8eee" (UID: "33fee552-706c-48a5-bc09-93a38cfc8eee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.000151 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33fee552-706c-48a5-bc09-93a38cfc8eee-kube-api-access-l58sr" (OuterVolumeSpecName: "kube-api-access-l58sr") pod "33fee552-706c-48a5-bc09-93a38cfc8eee" (UID: "33fee552-706c-48a5-bc09-93a38cfc8eee"). InnerVolumeSpecName "kube-api-access-l58sr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.044037 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33fee552-706c-48a5-bc09-93a38cfc8eee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33fee552-706c-48a5-bc09-93a38cfc8eee" (UID: "33fee552-706c-48a5-bc09-93a38cfc8eee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.044319 4667 generic.go:334] "Generic (PLEG): container finished" podID="33fee552-706c-48a5-bc09-93a38cfc8eee" containerID="22a3d6af6d1ac1b2ab6601457043358b4ff1e815fd9155f4a1f8835070a1cf63" exitCode=0
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.044357 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22gm5" event={"ID":"33fee552-706c-48a5-bc09-93a38cfc8eee","Type":"ContainerDied","Data":"22a3d6af6d1ac1b2ab6601457043358b4ff1e815fd9155f4a1f8835070a1cf63"}
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.044381 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22gm5" event={"ID":"33fee552-706c-48a5-bc09-93a38cfc8eee","Type":"ContainerDied","Data":"239568186f0577a40fdf29f75e896ce0de33a1f9a3c8d9fc4675b426e51a8008"}
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.044397 4667 scope.go:117] "RemoveContainer" containerID="22a3d6af6d1ac1b2ab6601457043358b4ff1e815fd9155f4a1f8835070a1cf63"
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.044441 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-22gm5"
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.065128 4667 scope.go:117] "RemoveContainer" containerID="f4e17f3632c9200deb3489a2664e4e318e10fdba63371cca05a5cc3e5d64c241"
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.084787 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-22gm5"]
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.091947 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33fee552-706c-48a5-bc09-93a38cfc8eee-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.091976 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33fee552-706c-48a5-bc09-93a38cfc8eee-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.091987 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l58sr\" (UniqueName: \"kubernetes.io/projected/33fee552-706c-48a5-bc09-93a38cfc8eee-kube-api-access-l58sr\") on node \"crc\" DevicePath \"\""
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.092416 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-22gm5"]
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.099131 4667 scope.go:117] "RemoveContainer" containerID="3c0e50faac915dfb64a7a70eb931c1d9069abf2ddb63e174ebc1eaf9edb6f4d4"
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.126582 4667 scope.go:117] "RemoveContainer" containerID="22a3d6af6d1ac1b2ab6601457043358b4ff1e815fd9155f4a1f8835070a1cf63"
Jan 31 04:56:57 crc kubenswrapper[4667]: E0131 04:56:57.127116 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22a3d6af6d1ac1b2ab6601457043358b4ff1e815fd9155f4a1f8835070a1cf63\": container with ID starting with 22a3d6af6d1ac1b2ab6601457043358b4ff1e815fd9155f4a1f8835070a1cf63 not found: ID does not exist" containerID="22a3d6af6d1ac1b2ab6601457043358b4ff1e815fd9155f4a1f8835070a1cf63"
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.127162 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a3d6af6d1ac1b2ab6601457043358b4ff1e815fd9155f4a1f8835070a1cf63"} err="failed to get container status \"22a3d6af6d1ac1b2ab6601457043358b4ff1e815fd9155f4a1f8835070a1cf63\": rpc error: code = NotFound desc = could not find container \"22a3d6af6d1ac1b2ab6601457043358b4ff1e815fd9155f4a1f8835070a1cf63\": container with ID starting with 22a3d6af6d1ac1b2ab6601457043358b4ff1e815fd9155f4a1f8835070a1cf63 not found: ID does not exist"
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.127189 4667 scope.go:117] "RemoveContainer" containerID="f4e17f3632c9200deb3489a2664e4e318e10fdba63371cca05a5cc3e5d64c241"
Jan 31 04:56:57 crc kubenswrapper[4667]: E0131 04:56:57.127491 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4e17f3632c9200deb3489a2664e4e318e10fdba63371cca05a5cc3e5d64c241\": container with ID starting with f4e17f3632c9200deb3489a2664e4e318e10fdba63371cca05a5cc3e5d64c241 not found: ID does not exist" containerID="f4e17f3632c9200deb3489a2664e4e318e10fdba63371cca05a5cc3e5d64c241"
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.127523 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4e17f3632c9200deb3489a2664e4e318e10fdba63371cca05a5cc3e5d64c241"} err="failed to get container status \"f4e17f3632c9200deb3489a2664e4e318e10fdba63371cca05a5cc3e5d64c241\": rpc error: code = NotFound desc = could not find container \"f4e17f3632c9200deb3489a2664e4e318e10fdba63371cca05a5cc3e5d64c241\": container with ID starting with f4e17f3632c9200deb3489a2664e4e318e10fdba63371cca05a5cc3e5d64c241 not found: ID does not exist"
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.127547 4667 scope.go:117] "RemoveContainer" containerID="3c0e50faac915dfb64a7a70eb931c1d9069abf2ddb63e174ebc1eaf9edb6f4d4"
Jan 31 04:56:57 crc kubenswrapper[4667]: E0131 04:56:57.127816 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c0e50faac915dfb64a7a70eb931c1d9069abf2ddb63e174ebc1eaf9edb6f4d4\": container with ID starting with 3c0e50faac915dfb64a7a70eb931c1d9069abf2ddb63e174ebc1eaf9edb6f4d4 not found: ID does not exist" containerID="3c0e50faac915dfb64a7a70eb931c1d9069abf2ddb63e174ebc1eaf9edb6f4d4"
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.127871 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c0e50faac915dfb64a7a70eb931c1d9069abf2ddb63e174ebc1eaf9edb6f4d4"} err="failed to get container status \"3c0e50faac915dfb64a7a70eb931c1d9069abf2ddb63e174ebc1eaf9edb6f4d4\": rpc error: code = NotFound desc = could not find container \"3c0e50faac915dfb64a7a70eb931c1d9069abf2ddb63e174ebc1eaf9edb6f4d4\": container with ID starting with 3c0e50faac915dfb64a7a70eb931c1d9069abf2ddb63e174ebc1eaf9edb6f4d4 not found: ID does not exist"
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.294546 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33fee552-706c-48a5-bc09-93a38cfc8eee" path="/var/lib/kubelet/pods/33fee552-706c-48a5-bc09-93a38cfc8eee/volumes"
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.515797 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-7qjzd_20571b84-83e2-494c-b690-9d7005ef51eb/nmstate-console-plugin/0.log"
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.686972 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lcflt_5713803d-a7eb-4197-bed0-8cfd7112add6/nmstate-handler/0.log"
Jan 31 04:56:57 crc kubenswrapper[4667]: I0131 04:56:57.755289 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-4fm4q_848d059a-1bd2-4bec-ae9b-36352c162923/kube-rbac-proxy/0.log"
Jan 31 04:56:59 crc kubenswrapper[4667]: I0131 04:56:59.606387 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-8kcsw" podUID="6721fd64-d815-4fa7-8332-76eebcfad816" containerName="registry-server" probeResult="failure" output=<
Jan 31 04:56:59 crc kubenswrapper[4667]: timeout: failed to connect service ":50051" within 1s
Jan 31 04:56:59 crc kubenswrapper[4667]: >
Jan 31 04:56:59 crc kubenswrapper[4667]: I0131 04:56:59.610072 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-8kcsw" podUID="6721fd64-d815-4fa7-8332-76eebcfad816" containerName="registry-server" probeResult="failure" output=<
Jan 31 04:56:59 crc kubenswrapper[4667]: timeout: failed to connect service ":50051" within 1s
Jan 31 04:56:59 crc kubenswrapper[4667]: >
Jan 31 04:56:59 crc kubenswrapper[4667]: I0131 04:56:59.629593 4667 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ms8lf container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 04:56:59 crc kubenswrapper[4667]: I0131 04:56:59.629653 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ms8lf" podUID="9af91113-a315-4416-a1f2-6566c16278cf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 31 04:56:59 crc kubenswrapper[4667]: I0131 04:56:59.669797 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-4fm4q_848d059a-1bd2-4bec-ae9b-36352c162923/nmstate-metrics/0.log"
Jan 31 04:56:59 crc kubenswrapper[4667]: I0131 04:56:59.817509 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-cjp5c_d4bb0958-e09f-488c-9d40-747ddd8ed31a/nmstate-operator/0.log"
Jan 31 04:56:59 crc kubenswrapper[4667]: I0131 04:56:59.902617 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-7nqzs_454649dc-76dc-45ea-8395-90c8e06d3e2f/nmstate-webhook/0.log"
Jan 31 04:57:08 crc kubenswrapper[4667]: I0131 04:57:08.281856 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353"
Jan 31 04:57:08 crc kubenswrapper[4667]: E0131 04:57:08.282546 4667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9b7g_openshift-machine-config-operator(b103bbd2-fb5d-4b2a-8b01-c32f699757df)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df"
Jan 31 04:57:22 crc kubenswrapper[4667]: I0131 04:57:22.284977 4667 scope.go:117]
"RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353" Jan 31 04:57:23 crc kubenswrapper[4667]: I0131 04:57:23.265423 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerStarted","Data":"208c07984525ecdfd6411a3103fc80a1c07f0dc4eed1c9fc9d8ef17558a6cfed"} Jan 31 04:57:34 crc kubenswrapper[4667]: I0131 04:57:34.354495 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-np9hr_62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77/kube-rbac-proxy/0.log" Jan 31 04:57:34 crc kubenswrapper[4667]: I0131 04:57:34.536570 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-np9hr_62ebc3a2-c4f8-4b5e-8fd7-1c462453ea77/controller/0.log" Jan 31 04:57:34 crc kubenswrapper[4667]: I0131 04:57:34.639182 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-frr-files/0.log" Jan 31 04:57:34 crc kubenswrapper[4667]: I0131 04:57:34.913867 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-frr-files/0.log" Jan 31 04:57:34 crc kubenswrapper[4667]: I0131 04:57:34.928683 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-reloader/0.log" Jan 31 04:57:34 crc kubenswrapper[4667]: I0131 04:57:34.986811 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-metrics/0.log" Jan 31 04:57:35 crc kubenswrapper[4667]: I0131 04:57:35.004613 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-reloader/0.log" Jan 31 04:57:35 crc kubenswrapper[4667]: I0131 04:57:35.178722 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-frr-files/0.log" Jan 31 04:57:35 crc kubenswrapper[4667]: I0131 04:57:35.184177 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-metrics/0.log" Jan 31 04:57:35 crc kubenswrapper[4667]: I0131 04:57:35.250520 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-reloader/0.log" Jan 31 04:57:35 crc kubenswrapper[4667]: I0131 04:57:35.261914 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-metrics/0.log" Jan 31 04:57:35 crc kubenswrapper[4667]: I0131 04:57:35.491970 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-metrics/0.log" Jan 31 04:57:35 crc kubenswrapper[4667]: I0131 04:57:35.500539 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-reloader/0.log" Jan 31 04:57:35 crc kubenswrapper[4667]: I0131 04:57:35.536195 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/cp-frr-files/0.log" Jan 31 04:57:35 crc kubenswrapper[4667]: I0131 04:57:35.567003 4667 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/controller/0.log" Jan 31 04:57:35 crc kubenswrapper[4667]: I0131 04:57:35.716978 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/kube-rbac-proxy/0.log" Jan 31 04:57:35 crc kubenswrapper[4667]: I0131 04:57:35.745544 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/frr-metrics/0.log" Jan 31 04:57:35 crc kubenswrapper[4667]: I0131 04:57:35.871174 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/kube-rbac-proxy-frr/0.log" Jan 31 04:57:36 crc kubenswrapper[4667]: I0131 04:57:36.000987 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/reloader/0.log" Jan 31 04:57:36 crc kubenswrapper[4667]: I0131 04:57:36.249352 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-7tjsc_bf8fd966-64cf-493a-b75c-2588e084afb8/frr-k8s-webhook-server/0.log" Jan 31 04:57:36 crc kubenswrapper[4667]: I0131 04:57:36.572688 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6fbb7fc476-m2zqb_22c596d0-b347-4dd0-ab61-7560ec9f5636/manager/0.log" Jan 31 04:57:36 crc kubenswrapper[4667]: I0131 04:57:36.699247 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-86d6b8c8bf-jddpp_a9fc0a54-a93e-4113-8b7c-25015ed1cb60/webhook-server/0.log" Jan 31 04:57:36 crc kubenswrapper[4667]: I0131 04:57:36.886150 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tqnx9_a01baef7-dca0-4217-a1de-cbfcf6348664/kube-rbac-proxy/0.log" Jan 31 04:57:37 crc kubenswrapper[4667]: I0131 04:57:37.094216 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h45xh_1d6f4476-1b56-481c-b15e-ec4149642acc/frr/0.log" Jan 31 04:57:37 crc kubenswrapper[4667]: I0131 04:57:37.375690 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tqnx9_a01baef7-dca0-4217-a1de-cbfcf6348664/speaker/0.log" Jan 31 04:57:52 crc kubenswrapper[4667]: I0131 04:57:52.270555 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw_65537ed9-39d5-40b0-82c9-a4b3d9dc6551/util/0.log" Jan 31 04:57:52 crc kubenswrapper[4667]: I0131 04:57:52.504814 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw_65537ed9-39d5-40b0-82c9-a4b3d9dc6551/util/0.log" Jan 31 04:57:52 crc kubenswrapper[4667]: I0131 04:57:52.531897 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw_65537ed9-39d5-40b0-82c9-a4b3d9dc6551/pull/0.log" Jan 31 04:57:52 crc kubenswrapper[4667]: I0131 04:57:52.597703 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw_65537ed9-39d5-40b0-82c9-a4b3d9dc6551/pull/0.log" Jan 31 04:57:52 crc kubenswrapper[4667]: I0131 04:57:52.727263 4667 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw_65537ed9-39d5-40b0-82c9-a4b3d9dc6551/util/0.log" Jan 31 04:57:52 crc kubenswrapper[4667]: I0131 04:57:52.731489 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw_65537ed9-39d5-40b0-82c9-a4b3d9dc6551/pull/0.log" Jan 31 04:57:52 crc kubenswrapper[4667]: I0131 04:57:52.777893 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc99ffw_65537ed9-39d5-40b0-82c9-a4b3d9dc6551/extract/0.log" Jan 31 04:57:52 crc kubenswrapper[4667]: I0131 04:57:52.927503 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz_12187e5c-4ff4-4ab3-baea-3501646a5c68/util/0.log" Jan 31 04:57:53 crc kubenswrapper[4667]: I0131 04:57:53.135270 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz_12187e5c-4ff4-4ab3-baea-3501646a5c68/util/0.log" Jan 31 04:57:53 crc kubenswrapper[4667]: I0131 04:57:53.158567 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz_12187e5c-4ff4-4ab3-baea-3501646a5c68/pull/0.log" Jan 31 04:57:53 crc kubenswrapper[4667]: I0131 04:57:53.228864 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz_12187e5c-4ff4-4ab3-baea-3501646a5c68/pull/0.log" Jan 31 04:57:53 crc kubenswrapper[4667]: I0131 04:57:53.369609 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz_12187e5c-4ff4-4ab3-baea-3501646a5c68/pull/0.log" Jan 31 04:57:53 crc kubenswrapper[4667]: I0131 04:57:53.456739 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz_12187e5c-4ff4-4ab3-baea-3501646a5c68/extract/0.log" Jan 31 04:57:53 crc kubenswrapper[4667]: I0131 04:57:53.460977 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xtrgz_12187e5c-4ff4-4ab3-baea-3501646a5c68/util/0.log" Jan 31 04:57:53 crc kubenswrapper[4667]: I0131 04:57:53.645158 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8kcsw_6721fd64-d815-4fa7-8332-76eebcfad816/extract-utilities/0.log" Jan 31 04:57:53 crc kubenswrapper[4667]: I0131 04:57:53.823594 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8kcsw_6721fd64-d815-4fa7-8332-76eebcfad816/extract-utilities/0.log" Jan 31 04:57:53 crc kubenswrapper[4667]: I0131 04:57:53.858672 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8kcsw_6721fd64-d815-4fa7-8332-76eebcfad816/extract-content/0.log" Jan 31 04:57:53 crc kubenswrapper[4667]: I0131 04:57:53.881523 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8kcsw_6721fd64-d815-4fa7-8332-76eebcfad816/extract-content/0.log" Jan 31 04:57:54 crc kubenswrapper[4667]: I0131 04:57:54.075814 4667 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8kcsw_6721fd64-d815-4fa7-8332-76eebcfad816/extract-content/0.log" Jan 31 04:57:54 crc kubenswrapper[4667]: I0131 04:57:54.076312 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8kcsw_6721fd64-d815-4fa7-8332-76eebcfad816/extract-utilities/0.log" Jan 31 04:57:54 crc kubenswrapper[4667]: I0131 04:57:54.408790 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qnlkn_b321312d-8d4b-4547-98c2-e3226cfb5dc5/extract-utilities/0.log" Jan 31 04:57:54 crc kubenswrapper[4667]: I0131 04:57:54.450975 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8kcsw_6721fd64-d815-4fa7-8332-76eebcfad816/registry-server/0.log" Jan 31 04:57:54 crc kubenswrapper[4667]: I0131 04:57:54.571064 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qnlkn_b321312d-8d4b-4547-98c2-e3226cfb5dc5/extract-utilities/0.log" Jan 31 04:57:54 crc kubenswrapper[4667]: I0131 04:57:54.613148 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qnlkn_b321312d-8d4b-4547-98c2-e3226cfb5dc5/extract-content/0.log" Jan 31 04:57:54 crc kubenswrapper[4667]: I0131 04:57:54.620033 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qnlkn_b321312d-8d4b-4547-98c2-e3226cfb5dc5/extract-content/0.log" Jan 31 04:57:54 crc kubenswrapper[4667]: I0131 04:57:54.841718 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qnlkn_b321312d-8d4b-4547-98c2-e3226cfb5dc5/extract-utilities/0.log" Jan 31 04:57:54 crc kubenswrapper[4667]: I0131 04:57:54.972008 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qnlkn_b321312d-8d4b-4547-98c2-e3226cfb5dc5/extract-content/0.log" Jan 31 04:57:55 crc kubenswrapper[4667]: I0131 04:57:55.207903 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cq68x_eca662bd-5da4-45dd-9d55-714a74234cec/marketplace-operator/0.log" Jan 31 04:57:55 crc kubenswrapper[4667]: I0131 04:57:55.369286 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4nf68_88ba3bd3-095c-4d75-b2c7-fa72d74704ef/extract-utilities/0.log" Jan 31 04:57:55 crc kubenswrapper[4667]: I0131 04:57:55.546124 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4nf68_88ba3bd3-095c-4d75-b2c7-fa72d74704ef/extract-utilities/0.log" Jan 31 04:57:55 crc kubenswrapper[4667]: I0131 04:57:55.690906 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4nf68_88ba3bd3-095c-4d75-b2c7-fa72d74704ef/extract-content/0.log" Jan 31 04:57:55 crc kubenswrapper[4667]: I0131 04:57:55.691029 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4nf68_88ba3bd3-095c-4d75-b2c7-fa72d74704ef/extract-content/0.log" Jan 31 04:57:55 crc kubenswrapper[4667]: I0131 04:57:55.769592 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qnlkn_b321312d-8d4b-4547-98c2-e3226cfb5dc5/registry-server/0.log" Jan 31 04:57:55 crc kubenswrapper[4667]: I0131 04:57:55.941274 4667 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4nf68_88ba3bd3-095c-4d75-b2c7-fa72d74704ef/extract-content/0.log" Jan 31 04:57:56 crc kubenswrapper[4667]: I0131 04:57:56.003304 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4nf68_88ba3bd3-095c-4d75-b2c7-fa72d74704ef/extract-utilities/0.log" Jan 31 04:57:56 crc kubenswrapper[4667]: I0131 04:57:56.193465 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4nf68_88ba3bd3-095c-4d75-b2c7-fa72d74704ef/registry-server/0.log" Jan 31 04:57:56 crc kubenswrapper[4667]: I0131 04:57:56.238317 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b72vl_7c3d3bea-fee8-4619-8daf-bef3da273e55/extract-utilities/0.log" Jan 31 04:57:56 crc kubenswrapper[4667]: I0131 04:57:56.491880 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b72vl_7c3d3bea-fee8-4619-8daf-bef3da273e55/extract-utilities/0.log" Jan 31 04:57:56 crc kubenswrapper[4667]: I0131 04:57:56.547205 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b72vl_7c3d3bea-fee8-4619-8daf-bef3da273e55/extract-content/0.log" Jan 31 04:57:56 crc kubenswrapper[4667]: I0131 04:57:56.549815 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b72vl_7c3d3bea-fee8-4619-8daf-bef3da273e55/extract-content/0.log" Jan 31 04:57:56 crc kubenswrapper[4667]: I0131 04:57:56.746104 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b72vl_7c3d3bea-fee8-4619-8daf-bef3da273e55/extract-content/0.log" Jan 31 04:57:56 crc kubenswrapper[4667]: I0131 04:57:56.751716 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b72vl_7c3d3bea-fee8-4619-8daf-bef3da273e55/extract-utilities/0.log" Jan 31 04:57:57 crc kubenswrapper[4667]: I0131 04:57:57.632923 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b72vl_7c3d3bea-fee8-4619-8daf-bef3da273e55/registry-server/0.log" Jan 31 04:59:45 crc kubenswrapper[4667]: I0131 04:59:45.704344 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 04:59:45 crc kubenswrapper[4667]: I0131 04:59:45.705353 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:00:00 crc kubenswrapper[4667]: I0131 05:00:00.216289 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl"] Jan 31 05:00:00 crc kubenswrapper[4667]: E0131 05:00:00.217271 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33fee552-706c-48a5-bc09-93a38cfc8eee" containerName="extract-content" Jan 31 05:00:00 crc kubenswrapper[4667]: I0131 05:00:00.217285 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="33fee552-706c-48a5-bc09-93a38cfc8eee" 
containerName="extract-content" Jan 31 05:00:00 crc kubenswrapper[4667]: E0131 05:00:00.217303 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33fee552-706c-48a5-bc09-93a38cfc8eee" containerName="extract-utilities" Jan 31 05:00:00 crc kubenswrapper[4667]: I0131 05:00:00.217310 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="33fee552-706c-48a5-bc09-93a38cfc8eee" containerName="extract-utilities" Jan 31 05:00:00 crc kubenswrapper[4667]: E0131 05:00:00.217323 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33fee552-706c-48a5-bc09-93a38cfc8eee" containerName="registry-server" Jan 31 05:00:00 crc kubenswrapper[4667]: I0131 05:00:00.217330 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="33fee552-706c-48a5-bc09-93a38cfc8eee" containerName="registry-server" Jan 31 05:00:00 crc kubenswrapper[4667]: I0131 05:00:00.217527 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="33fee552-706c-48a5-bc09-93a38cfc8eee" containerName="registry-server" Jan 31 05:00:00 crc kubenswrapper[4667]: I0131 05:00:00.218221 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl" Jan 31 05:00:00 crc kubenswrapper[4667]: I0131 05:00:00.223204 4667 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 05:00:00 crc kubenswrapper[4667]: I0131 05:00:00.223342 4667 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 05:00:00 crc kubenswrapper[4667]: I0131 05:00:00.229239 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl"] Jan 31 05:00:00 crc kubenswrapper[4667]: I0131 05:00:00.412094 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a1bd471-f193-4471-8719-1865f3d8bf1c-config-volume\") pod \"collect-profiles-29497260-xnfnl\" (UID: \"1a1bd471-f193-4471-8719-1865f3d8bf1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl" Jan 31 05:00:00 crc kubenswrapper[4667]: I0131 05:00:00.412141 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a1bd471-f193-4471-8719-1865f3d8bf1c-secret-volume\") pod \"collect-profiles-29497260-xnfnl\" (UID: \"1a1bd471-f193-4471-8719-1865f3d8bf1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl" Jan 31 05:00:00 crc kubenswrapper[4667]: I0131 05:00:00.414018 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn984\" (UniqueName: \"kubernetes.io/projected/1a1bd471-f193-4471-8719-1865f3d8bf1c-kube-api-access-mn984\") pod \"collect-profiles-29497260-xnfnl\" (UID: \"1a1bd471-f193-4471-8719-1865f3d8bf1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl" Jan 31 05:00:00 crc kubenswrapper[4667]: I0131 05:00:00.516528 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn984\" (UniqueName: \"kubernetes.io/projected/1a1bd471-f193-4471-8719-1865f3d8bf1c-kube-api-access-mn984\") pod \"collect-profiles-29497260-xnfnl\" (UID: \"1a1bd471-f193-4471-8719-1865f3d8bf1c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl" Jan 31 05:00:00 crc kubenswrapper[4667]: I0131 05:00:00.517155 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a1bd471-f193-4471-8719-1865f3d8bf1c-config-volume\") pod \"collect-profiles-29497260-xnfnl\" (UID: \"1a1bd471-f193-4471-8719-1865f3d8bf1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl" Jan 31 05:00:00 crc kubenswrapper[4667]: I0131 05:00:00.517199 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a1bd471-f193-4471-8719-1865f3d8bf1c-secret-volume\") pod \"collect-profiles-29497260-xnfnl\" (UID: \"1a1bd471-f193-4471-8719-1865f3d8bf1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl" Jan 31 05:00:00 crc kubenswrapper[4667]: I0131 05:00:00.519567 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a1bd471-f193-4471-8719-1865f3d8bf1c-config-volume\") pod \"collect-profiles-29497260-xnfnl\" (UID: \"1a1bd471-f193-4471-8719-1865f3d8bf1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl" Jan 31 05:00:00 crc kubenswrapper[4667]: I0131 05:00:00.525021 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a1bd471-f193-4471-8719-1865f3d8bf1c-secret-volume\") pod \"collect-profiles-29497260-xnfnl\" (UID: \"1a1bd471-f193-4471-8719-1865f3d8bf1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl" Jan 31 05:00:00 crc kubenswrapper[4667]: I0131 05:00:00.533055 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn984\" (UniqueName: \"kubernetes.io/projected/1a1bd471-f193-4471-8719-1865f3d8bf1c-kube-api-access-mn984\") pod \"collect-profiles-29497260-xnfnl\" (UID: \"1a1bd471-f193-4471-8719-1865f3d8bf1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl" Jan 31 05:00:00 crc kubenswrapper[4667]: I0131 05:00:00.549933 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl" Jan 31 05:00:00 crc kubenswrapper[4667]: I0131 05:00:00.984763 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl"] Jan 31 05:00:01 crc kubenswrapper[4667]: I0131 05:00:01.905381 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl" event={"ID":"1a1bd471-f193-4471-8719-1865f3d8bf1c","Type":"ContainerStarted","Data":"948f2896b53646bf99c2ccb452f0d1d62c6f704241644598055e4e97e974242b"} Jan 31 05:00:01 crc kubenswrapper[4667]: I0131 05:00:01.905664 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl" event={"ID":"1a1bd471-f193-4471-8719-1865f3d8bf1c","Type":"ContainerStarted","Data":"f3ee01030e00eda4df358c34a8bc46c1a9dbfe05a6e422c408c1e376c1d7bc25"} Jan 31 05:00:01 crc kubenswrapper[4667]: I0131 05:00:01.934350 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl" podStartSLOduration=1.9343248339999999 podStartE2EDuration="1.934324834s" podCreationTimestamp="2026-01-31 05:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:00:01.927785102 +0000 UTC m=+4325.444120421" watchObservedRunningTime="2026-01-31 05:00:01.934324834 +0000 UTC m=+4325.450660143" Jan 31 05:00:02 crc kubenswrapper[4667]: I0131 05:00:02.920004 4667 generic.go:334] "Generic (PLEG): container finished" podID="1a1bd471-f193-4471-8719-1865f3d8bf1c" containerID="948f2896b53646bf99c2ccb452f0d1d62c6f704241644598055e4e97e974242b" exitCode=0 Jan 31 05:00:02 crc kubenswrapper[4667]: I0131 05:00:02.920090 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl" event={"ID":"1a1bd471-f193-4471-8719-1865f3d8bf1c","Type":"ContainerDied","Data":"948f2896b53646bf99c2ccb452f0d1d62c6f704241644598055e4e97e974242b"} Jan 31 05:00:03 crc kubenswrapper[4667]: I0131 05:00:03.936956 4667 generic.go:334] "Generic (PLEG): container finished" podID="6300166c-bced-499e-b7f3-1238570ddc71" containerID="6e51e9ecda6475a88c0a2bc4c2976367e222fb37581f05079d6875f4a56412ee" exitCode=0 Jan 31 05:00:03 crc kubenswrapper[4667]: I0131 05:00:03.937069 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdrrt/must-gather-7sd6m" event={"ID":"6300166c-bced-499e-b7f3-1238570ddc71","Type":"ContainerDied","Data":"6e51e9ecda6475a88c0a2bc4c2976367e222fb37581f05079d6875f4a56412ee"} Jan 31 05:00:03 crc kubenswrapper[4667]: I0131 05:00:03.938098 4667 scope.go:117] "RemoveContainer" containerID="6e51e9ecda6475a88c0a2bc4c2976367e222fb37581f05079d6875f4a56412ee" Jan 31 05:00:04 crc kubenswrapper[4667]: I0131 05:00:04.334051 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl" Jan 31 05:00:04 crc kubenswrapper[4667]: I0131 05:00:04.456163 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rdrrt_must-gather-7sd6m_6300166c-bced-499e-b7f3-1238570ddc71/gather/0.log" Jan 31 05:00:04 crc kubenswrapper[4667]: I0131 05:00:04.502654 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a1bd471-f193-4471-8719-1865f3d8bf1c-secret-volume\") pod \"1a1bd471-f193-4471-8719-1865f3d8bf1c\" (UID: \"1a1bd471-f193-4471-8719-1865f3d8bf1c\") " Jan 31 05:00:04 crc kubenswrapper[4667]: I0131 05:00:04.502745 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn984\" (UniqueName: \"kubernetes.io/projected/1a1bd471-f193-4471-8719-1865f3d8bf1c-kube-api-access-mn984\") pod \"1a1bd471-f193-4471-8719-1865f3d8bf1c\" (UID: \"1a1bd471-f193-4471-8719-1865f3d8bf1c\") " Jan 31 05:00:04 crc kubenswrapper[4667]: I0131 05:00:04.502926 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a1bd471-f193-4471-8719-1865f3d8bf1c-config-volume\") pod \"1a1bd471-f193-4471-8719-1865f3d8bf1c\" (UID: \"1a1bd471-f193-4471-8719-1865f3d8bf1c\") " Jan 31 05:00:04 crc kubenswrapper[4667]: I0131 05:00:04.503538 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a1bd471-f193-4471-8719-1865f3d8bf1c-config-volume" (OuterVolumeSpecName: "config-volume") pod "1a1bd471-f193-4471-8719-1865f3d8bf1c" (UID: "1a1bd471-f193-4471-8719-1865f3d8bf1c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 05:00:04 crc kubenswrapper[4667]: I0131 05:00:04.521721 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a1bd471-f193-4471-8719-1865f3d8bf1c-kube-api-access-mn984" (OuterVolumeSpecName: "kube-api-access-mn984") pod "1a1bd471-f193-4471-8719-1865f3d8bf1c" (UID: "1a1bd471-f193-4471-8719-1865f3d8bf1c"). InnerVolumeSpecName "kube-api-access-mn984". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:04 crc kubenswrapper[4667]: I0131 05:00:04.522913 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a1bd471-f193-4471-8719-1865f3d8bf1c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1a1bd471-f193-4471-8719-1865f3d8bf1c" (UID: "1a1bd471-f193-4471-8719-1865f3d8bf1c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:00:04 crc kubenswrapper[4667]: I0131 05:00:04.605663 4667 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a1bd471-f193-4471-8719-1865f3d8bf1c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:04 crc kubenswrapper[4667]: I0131 05:00:04.605688 4667 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a1bd471-f193-4471-8719-1865f3d8bf1c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:04 crc kubenswrapper[4667]: I0131 05:00:04.605700 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn984\" (UniqueName: \"kubernetes.io/projected/1a1bd471-f193-4471-8719-1865f3d8bf1c-kube-api-access-mn984\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:04 crc kubenswrapper[4667]: I0131 05:00:04.958275 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl" event={"ID":"1a1bd471-f193-4471-8719-1865f3d8bf1c","Type":"ContainerDied","Data":"f3ee01030e00eda4df358c34a8bc46c1a9dbfe05a6e422c408c1e376c1d7bc25"} Jan 31 05:00:04 crc kubenswrapper[4667]: I0131 05:00:04.958701 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3ee01030e00eda4df358c34a8bc46c1a9dbfe05a6e422c408c1e376c1d7bc25" Jan 31 05:00:04 crc kubenswrapper[4667]: I0131 05:00:04.958325 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497260-xnfnl" Jan 31 05:00:05 crc kubenswrapper[4667]: I0131 05:00:05.083957 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z"] Jan 31 05:00:05 crc kubenswrapper[4667]: I0131 05:00:05.099993 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497215-ts69z"] Jan 31 05:00:05 crc kubenswrapper[4667]: I0131 05:00:05.304014 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd5c864b-24e1-4c2d-86bf-a3b030fc98ab" path="/var/lib/kubelet/pods/fd5c864b-24e1-4c2d-86bf-a3b030fc98ab/volumes" Jan 31 05:00:15 crc kubenswrapper[4667]: I0131 05:00:15.704229 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:00:15 crc kubenswrapper[4667]: I0131 05:00:15.704895 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:00:15 crc kubenswrapper[4667]: I0131 05:00:15.707354 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rdrrt/must-gather-7sd6m"] Jan 31 05:00:15 crc kubenswrapper[4667]: I0131 05:00:15.707608 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rdrrt/must-gather-7sd6m" podUID="6300166c-bced-499e-b7f3-1238570ddc71" containerName="copy" containerID="cri-o://d8709268d29d67488b93f0bf3fda1bf23882e929ca9a67cda4ecd2d49c808cc1" 
gracePeriod=2 Jan 31 05:00:15 crc kubenswrapper[4667]: I0131 05:00:15.714867 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rdrrt/must-gather-7sd6m"] Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.121124 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rdrrt_must-gather-7sd6m_6300166c-bced-499e-b7f3-1238570ddc71/copy/0.log" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.121510 4667 generic.go:334] "Generic (PLEG): container finished" podID="6300166c-bced-499e-b7f3-1238570ddc71" containerID="d8709268d29d67488b93f0bf3fda1bf23882e929ca9a67cda4ecd2d49c808cc1" exitCode=143 Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.258465 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lvwtx"] Jan 31 05:00:16 crc kubenswrapper[4667]: E0131 05:00:16.259218 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6300166c-bced-499e-b7f3-1238570ddc71" containerName="copy" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.259236 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="6300166c-bced-499e-b7f3-1238570ddc71" containerName="copy" Jan 31 05:00:16 crc kubenswrapper[4667]: E0131 05:00:16.259258 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6300166c-bced-499e-b7f3-1238570ddc71" containerName="gather" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.259263 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="6300166c-bced-499e-b7f3-1238570ddc71" containerName="gather" Jan 31 05:00:16 crc kubenswrapper[4667]: E0131 05:00:16.259284 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a1bd471-f193-4471-8719-1865f3d8bf1c" containerName="collect-profiles" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.259291 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a1bd471-f193-4471-8719-1865f3d8bf1c" containerName="collect-profiles" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.259446 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="6300166c-bced-499e-b7f3-1238570ddc71" containerName="copy" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.259457 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="6300166c-bced-499e-b7f3-1238570ddc71" containerName="gather" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.259470 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a1bd471-f193-4471-8719-1865f3d8bf1c" containerName="collect-profiles" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.260847 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lvwtx" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.269054 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lvwtx"] Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.407044 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rdrrt_must-gather-7sd6m_6300166c-bced-499e-b7f3-1238570ddc71/copy/0.log" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.407451 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rdrrt/must-gather-7sd6m" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.426511 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq7cn\" (UniqueName: \"kubernetes.io/projected/d89114ed-816f-438e-8420-76150bbe787e-kube-api-access-zq7cn\") pod \"redhat-marketplace-lvwtx\" (UID: \"d89114ed-816f-438e-8420-76150bbe787e\") " pod="openshift-marketplace/redhat-marketplace-lvwtx" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.426632 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89114ed-816f-438e-8420-76150bbe787e-utilities\") pod \"redhat-marketplace-lvwtx\" (UID: \"d89114ed-816f-438e-8420-76150bbe787e\") " pod="openshift-marketplace/redhat-marketplace-lvwtx" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.427077 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89114ed-816f-438e-8420-76150bbe787e-catalog-content\") pod \"redhat-marketplace-lvwtx\" (UID: \"d89114ed-816f-438e-8420-76150bbe787e\") " pod="openshift-marketplace/redhat-marketplace-lvwtx" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.528412 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzfq9\" (UniqueName: \"kubernetes.io/projected/6300166c-bced-499e-b7f3-1238570ddc71-kube-api-access-pzfq9\") pod \"6300166c-bced-499e-b7f3-1238570ddc71\" (UID: \"6300166c-bced-499e-b7f3-1238570ddc71\") " Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.528755 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6300166c-bced-499e-b7f3-1238570ddc71-must-gather-output\") pod \"6300166c-bced-499e-b7f3-1238570ddc71\" (UID: \"6300166c-bced-499e-b7f3-1238570ddc71\") " Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.528976 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89114ed-816f-438e-8420-76150bbe787e-utilities\") pod \"redhat-marketplace-lvwtx\" (UID: \"d89114ed-816f-438e-8420-76150bbe787e\") " pod="openshift-marketplace/redhat-marketplace-lvwtx" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.529124 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89114ed-816f-438e-8420-76150bbe787e-catalog-content\") pod \"redhat-marketplace-lvwtx\" (UID: \"d89114ed-816f-438e-8420-76150bbe787e\") " pod="openshift-marketplace/redhat-marketplace-lvwtx" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.529238 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq7cn\" (UniqueName: \"kubernetes.io/projected/d89114ed-816f-438e-8420-76150bbe787e-kube-api-access-zq7cn\") pod \"redhat-marketplace-lvwtx\" (UID: \"d89114ed-816f-438e-8420-76150bbe787e\") " pod="openshift-marketplace/redhat-marketplace-lvwtx" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.529829 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89114ed-816f-438e-8420-76150bbe787e-utilities\") pod \"redhat-marketplace-lvwtx\" (UID: \"d89114ed-816f-438e-8420-76150bbe787e\") 
" pod="openshift-marketplace/redhat-marketplace-lvwtx" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.529909 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89114ed-816f-438e-8420-76150bbe787e-catalog-content\") pod \"redhat-marketplace-lvwtx\" (UID: \"d89114ed-816f-438e-8420-76150bbe787e\") " pod="openshift-marketplace/redhat-marketplace-lvwtx" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.533716 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6300166c-bced-499e-b7f3-1238570ddc71-kube-api-access-pzfq9" (OuterVolumeSpecName: "kube-api-access-pzfq9") pod "6300166c-bced-499e-b7f3-1238570ddc71" (UID: "6300166c-bced-499e-b7f3-1238570ddc71"). InnerVolumeSpecName "kube-api-access-pzfq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.551773 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq7cn\" (UniqueName: \"kubernetes.io/projected/d89114ed-816f-438e-8420-76150bbe787e-kube-api-access-zq7cn\") pod \"redhat-marketplace-lvwtx\" (UID: \"d89114ed-816f-438e-8420-76150bbe787e\") " pod="openshift-marketplace/redhat-marketplace-lvwtx" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.593751 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lvwtx" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.630984 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzfq9\" (UniqueName: \"kubernetes.io/projected/6300166c-bced-499e-b7f3-1238570ddc71-kube-api-access-pzfq9\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.733809 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6300166c-bced-499e-b7f3-1238570ddc71-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6300166c-bced-499e-b7f3-1238570ddc71" (UID: "6300166c-bced-499e-b7f3-1238570ddc71"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:00:16 crc kubenswrapper[4667]: I0131 05:00:16.835873 4667 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6300166c-bced-499e-b7f3-1238570ddc71-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:17 crc kubenswrapper[4667]: I0131 05:00:17.102616 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lvwtx"] Jan 31 05:00:17 crc kubenswrapper[4667]: I0131 05:00:17.128937 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvwtx" event={"ID":"d89114ed-816f-438e-8420-76150bbe787e","Type":"ContainerStarted","Data":"fa0cf2cce8a8539b9161a0910f9c17f68f1fc1bf00fe7a1aa8340658eeb97526"} Jan 31 05:00:17 crc kubenswrapper[4667]: I0131 05:00:17.131371 4667 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rdrrt_must-gather-7sd6m_6300166c-bced-499e-b7f3-1238570ddc71/copy/0.log" Jan 31 05:00:17 crc kubenswrapper[4667]: I0131 05:00:17.131735 4667 scope.go:117] "RemoveContainer" containerID="d8709268d29d67488b93f0bf3fda1bf23882e929ca9a67cda4ecd2d49c808cc1" Jan 31 05:00:17 crc kubenswrapper[4667]: I0131 05:00:17.131757 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rdrrt/must-gather-7sd6m" Jan 31 05:00:17 crc kubenswrapper[4667]: I0131 05:00:17.225133 4667 scope.go:117] "RemoveContainer" containerID="6e51e9ecda6475a88c0a2bc4c2976367e222fb37581f05079d6875f4a56412ee" Jan 31 05:00:17 crc kubenswrapper[4667]: I0131 05:00:17.293330 4667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6300166c-bced-499e-b7f3-1238570ddc71" path="/var/lib/kubelet/pods/6300166c-bced-499e-b7f3-1238570ddc71/volumes" Jan 31 05:00:18 crc kubenswrapper[4667]: I0131 05:00:18.145733 4667 generic.go:334] "Generic (PLEG): container finished" podID="d89114ed-816f-438e-8420-76150bbe787e" containerID="76f24731458ab714a762f9b231c20c0480cb25b24e7aaa1e487b5eea8fef2f99" exitCode=0 Jan 31 05:00:18 crc kubenswrapper[4667]: I0131 05:00:18.145782 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvwtx" event={"ID":"d89114ed-816f-438e-8420-76150bbe787e","Type":"ContainerDied","Data":"76f24731458ab714a762f9b231c20c0480cb25b24e7aaa1e487b5eea8fef2f99"} Jan 31 05:00:18 crc kubenswrapper[4667]: I0131 05:00:18.147783 4667 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 05:00:19 crc kubenswrapper[4667]: I0131 05:00:19.157797 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvwtx" event={"ID":"d89114ed-816f-438e-8420-76150bbe787e","Type":"ContainerStarted","Data":"e97271c21248b16c624d1c184c030eaa4fca4a1f4c43dc501a3a8d7b5b504cf2"} Jan 31 05:00:20 crc kubenswrapper[4667]: I0131 05:00:20.189012 4667 generic.go:334] "Generic (PLEG): container finished" podID="d89114ed-816f-438e-8420-76150bbe787e" containerID="e97271c21248b16c624d1c184c030eaa4fca4a1f4c43dc501a3a8d7b5b504cf2" exitCode=0 Jan 31 05:00:20 crc kubenswrapper[4667]: I0131 05:00:20.190143 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvwtx" event={"ID":"d89114ed-816f-438e-8420-76150bbe787e","Type":"ContainerDied","Data":"e97271c21248b16c624d1c184c030eaa4fca4a1f4c43dc501a3a8d7b5b504cf2"} Jan 31 05:00:21 crc kubenswrapper[4667]: I0131 05:00:21.200457 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvwtx" event={"ID":"d89114ed-816f-438e-8420-76150bbe787e","Type":"ContainerStarted","Data":"87ca35f3057473e48fc95df43a36bda36e6b3521723eb68a2563b6c113c7665f"} Jan 31 05:00:21 crc kubenswrapper[4667]: I0131 05:00:21.221022 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lvwtx" podStartSLOduration=2.595556933 podStartE2EDuration="5.221002689s" podCreationTimestamp="2026-01-31 05:00:16 +0000 UTC" firstStartedPulling="2026-01-31 05:00:18.147561891 +0000 UTC m=+4341.663897180" lastFinishedPulling="2026-01-31 05:00:20.773007597 +0000 UTC m=+4344.289342936" observedRunningTime="2026-01-31 05:00:21.219200652 +0000 UTC m=+4344.735535971" watchObservedRunningTime="2026-01-31 05:00:21.221002689 +0000 UTC m=+4344.737337988" Jan 31 05:00:26 crc kubenswrapper[4667]: I0131 05:00:26.594446 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lvwtx" Jan 31 05:00:26 crc kubenswrapper[4667]: I0131 05:00:26.595347 4667 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lvwtx" Jan 31 05:00:26 crc kubenswrapper[4667]: I0131 05:00:26.672011 4667 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lvwtx" Jan 31 05:00:27 crc kubenswrapper[4667]: I0131 05:00:27.027556 4667 scope.go:117] "RemoveContainer" containerID="f734554215f06de504d368806cbfa4c8abea481e547c0599ce87adc52bc8f8c0" Jan 31 05:00:27 crc kubenswrapper[4667]: I0131 05:00:27.338549 4667 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lvwtx" Jan 31 05:00:27 crc kubenswrapper[4667]: I0131 05:00:27.399697 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lvwtx"] Jan 31 05:00:29 crc kubenswrapper[4667]: I0131 05:00:29.290302 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lvwtx" podUID="d89114ed-816f-438e-8420-76150bbe787e" containerName="registry-server" containerID="cri-o://87ca35f3057473e48fc95df43a36bda36e6b3521723eb68a2563b6c113c7665f" gracePeriod=2 Jan 31 05:00:29 crc kubenswrapper[4667]: I0131 05:00:29.768267 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lvwtx" Jan 31 05:00:29 crc kubenswrapper[4667]: I0131 05:00:29.926727 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89114ed-816f-438e-8420-76150bbe787e-utilities\") pod \"d89114ed-816f-438e-8420-76150bbe787e\" (UID: \"d89114ed-816f-438e-8420-76150bbe787e\") " Jan 31 05:00:29 crc kubenswrapper[4667]: I0131 05:00:29.926859 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq7cn\" (UniqueName: \"kubernetes.io/projected/d89114ed-816f-438e-8420-76150bbe787e-kube-api-access-zq7cn\") pod \"d89114ed-816f-438e-8420-76150bbe787e\" (UID: \"d89114ed-816f-438e-8420-76150bbe787e\") " Jan 31 05:00:29 crc kubenswrapper[4667]: I0131 05:00:29.926955 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89114ed-816f-438e-8420-76150bbe787e-catalog-content\") pod \"d89114ed-816f-438e-8420-76150bbe787e\" (UID: \"d89114ed-816f-438e-8420-76150bbe787e\") " Jan 31 05:00:29 crc kubenswrapper[4667]: I0131 05:00:29.928745 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d89114ed-816f-438e-8420-76150bbe787e-utilities" (OuterVolumeSpecName: "utilities") pod "d89114ed-816f-438e-8420-76150bbe787e" (UID: "d89114ed-816f-438e-8420-76150bbe787e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:00:29 crc kubenswrapper[4667]: I0131 05:00:29.954907 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d89114ed-816f-438e-8420-76150bbe787e-kube-api-access-zq7cn" (OuterVolumeSpecName: "kube-api-access-zq7cn") pod "d89114ed-816f-438e-8420-76150bbe787e" (UID: "d89114ed-816f-438e-8420-76150bbe787e"). InnerVolumeSpecName "kube-api-access-zq7cn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:00:30 crc kubenswrapper[4667]: I0131 05:00:30.028940 4667 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d89114ed-816f-438e-8420-76150bbe787e-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:30 crc kubenswrapper[4667]: I0131 05:00:30.028974 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq7cn\" (UniqueName: \"kubernetes.io/projected/d89114ed-816f-438e-8420-76150bbe787e-kube-api-access-zq7cn\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:30 crc kubenswrapper[4667]: I0131 05:00:30.052301 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d89114ed-816f-438e-8420-76150bbe787e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d89114ed-816f-438e-8420-76150bbe787e" (UID: "d89114ed-816f-438e-8420-76150bbe787e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 05:00:30 crc kubenswrapper[4667]: I0131 05:00:30.131133 4667 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d89114ed-816f-438e-8420-76150bbe787e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 05:00:30 crc kubenswrapper[4667]: I0131 05:00:30.304232 4667 generic.go:334] "Generic (PLEG): container finished" podID="d89114ed-816f-438e-8420-76150bbe787e" containerID="87ca35f3057473e48fc95df43a36bda36e6b3521723eb68a2563b6c113c7665f" exitCode=0 Jan 31 05:00:30 crc kubenswrapper[4667]: I0131 05:00:30.304315 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lvwtx" Jan 31 05:00:30 crc kubenswrapper[4667]: I0131 05:00:30.304337 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvwtx" event={"ID":"d89114ed-816f-438e-8420-76150bbe787e","Type":"ContainerDied","Data":"87ca35f3057473e48fc95df43a36bda36e6b3521723eb68a2563b6c113c7665f"} Jan 31 05:00:30 crc kubenswrapper[4667]: I0131 05:00:30.304671 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lvwtx" event={"ID":"d89114ed-816f-438e-8420-76150bbe787e","Type":"ContainerDied","Data":"fa0cf2cce8a8539b9161a0910f9c17f68f1fc1bf00fe7a1aa8340658eeb97526"} Jan 31 05:00:30 crc kubenswrapper[4667]: I0131 05:00:30.304698 4667 scope.go:117] "RemoveContainer" containerID="87ca35f3057473e48fc95df43a36bda36e6b3521723eb68a2563b6c113c7665f" Jan 31 05:00:30 crc kubenswrapper[4667]: I0131 05:00:30.341283 4667 scope.go:117] "RemoveContainer" containerID="e97271c21248b16c624d1c184c030eaa4fca4a1f4c43dc501a3a8d7b5b504cf2" Jan 31 05:00:30 crc kubenswrapper[4667]: I0131 05:00:30.367951 4667 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lvwtx"] Jan 31 05:00:30 crc kubenswrapper[4667]: I0131 05:00:30.392159 4667 scope.go:117] "RemoveContainer" containerID="76f24731458ab714a762f9b231c20c0480cb25b24e7aaa1e487b5eea8fef2f99" Jan 31 05:00:30 crc kubenswrapper[4667]: I0131 05:00:30.395439 4667 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lvwtx"] Jan 31 05:00:30 crc kubenswrapper[4667]: I0131 05:00:30.419156 4667 scope.go:117] "RemoveContainer" containerID="87ca35f3057473e48fc95df43a36bda36e6b3521723eb68a2563b6c113c7665f" Jan 31 05:00:30 crc kubenswrapper[4667]: E0131 05:00:30.419578 4667 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87ca35f3057473e48fc95df43a36bda36e6b3521723eb68a2563b6c113c7665f\": container with ID starting with 87ca35f3057473e48fc95df43a36bda36e6b3521723eb68a2563b6c113c7665f not found: ID does not exist" containerID="87ca35f3057473e48fc95df43a36bda36e6b3521723eb68a2563b6c113c7665f" Jan 31 05:00:30 crc kubenswrapper[4667]: I0131 05:00:30.419628 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87ca35f3057473e48fc95df43a36bda36e6b3521723eb68a2563b6c113c7665f"} err="failed to get container status \"87ca35f3057473e48fc95df43a36bda36e6b3521723eb68a2563b6c113c7665f\": rpc error: code = NotFound desc = could not find container \"87ca35f3057473e48fc95df43a36bda36e6b3521723eb68a2563b6c113c7665f\": container with ID starting with 87ca35f3057473e48fc95df43a36bda36e6b3521723eb68a2563b6c113c7665f not found: ID does not exist" Jan 31 05:00:30 crc kubenswrapper[4667]: I0131 05:00:30.419661 4667 scope.go:117] "RemoveContainer" containerID="e97271c21248b16c624d1c184c030eaa4fca4a1f4c43dc501a3a8d7b5b504cf2" Jan 31 05:00:30 crc kubenswrapper[4667]: E0131 05:00:30.420179 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e97271c21248b16c624d1c184c030eaa4fca4a1f4c43dc501a3a8d7b5b504cf2\": container with ID starting with e97271c21248b16c624d1c184c030eaa4fca4a1f4c43dc501a3a8d7b5b504cf2 not found: ID does not exist" containerID="e97271c21248b16c624d1c184c030eaa4fca4a1f4c43dc501a3a8d7b5b504cf2" Jan 31 05:00:30 crc kubenswrapper[4667]: I0131 05:00:30.420210 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e97271c21248b16c624d1c184c030eaa4fca4a1f4c43dc501a3a8d7b5b504cf2"} err="failed to get container status \"e97271c21248b16c624d1c184c030eaa4fca4a1f4c43dc501a3a8d7b5b504cf2\": rpc error: code = NotFound desc = could not find container \"e97271c21248b16c624d1c184c030eaa4fca4a1f4c43dc501a3a8d7b5b504cf2\": container with ID starting with e97271c21248b16c624d1c184c030eaa4fca4a1f4c43dc501a3a8d7b5b504cf2 not found: ID does not exist" Jan 31 05:00:30 crc kubenswrapper[4667]: I0131 05:00:30.420232 4667 scope.go:117] "RemoveContainer" containerID="76f24731458ab714a762f9b231c20c0480cb25b24e7aaa1e487b5eea8fef2f99" Jan 31 05:00:30 crc kubenswrapper[4667]: E0131 05:00:30.420725 4667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76f24731458ab714a762f9b231c20c0480cb25b24e7aaa1e487b5eea8fef2f99\": container with ID starting with 76f24731458ab714a762f9b231c20c0480cb25b24e7aaa1e487b5eea8fef2f99 not found: ID does not exist" containerID="76f24731458ab714a762f9b231c20c0480cb25b24e7aaa1e487b5eea8fef2f99" Jan 31 05:00:30 crc kubenswrapper[4667]: I0131 05:00:30.420793 4667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f24731458ab714a762f9b231c20c0480cb25b24e7aaa1e487b5eea8fef2f99"} err="failed to get container status \"76f24731458ab714a762f9b231c20c0480cb25b24e7aaa1e487b5eea8fef2f99\": rpc error: code = NotFound desc = could not find container \"76f24731458ab714a762f9b231c20c0480cb25b24e7aaa1e487b5eea8fef2f99\": container with ID starting with 76f24731458ab714a762f9b231c20c0480cb25b24e7aaa1e487b5eea8fef2f99 not found: ID does not exist" Jan 31 05:00:31 crc kubenswrapper[4667]: I0131 05:00:31.304494 4667 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="d89114ed-816f-438e-8420-76150bbe787e" path="/var/lib/kubelet/pods/d89114ed-816f-438e-8420-76150bbe787e/volumes" Jan 31 05:00:45 crc kubenswrapper[4667]: I0131 05:00:45.704258 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:00:45 crc kubenswrapper[4667]: I0131 05:00:45.704916 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 05:00:45 crc kubenswrapper[4667]: I0131 05:00:45.704982 4667 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" Jan 31 05:00:45 crc kubenswrapper[4667]: I0131 05:00:45.705810 4667 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"208c07984525ecdfd6411a3103fc80a1c07f0dc4eed1c9fc9d8ef17558a6cfed"} pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 05:00:45 crc kubenswrapper[4667]: I0131 05:00:45.705929 4667 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" containerID="cri-o://208c07984525ecdfd6411a3103fc80a1c07f0dc4eed1c9fc9d8ef17558a6cfed" gracePeriod=600 Jan 31 05:00:46 crc kubenswrapper[4667]: I0131 05:00:46.487659 4667 generic.go:334] "Generic (PLEG): container finished" podID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerID="208c07984525ecdfd6411a3103fc80a1c07f0dc4eed1c9fc9d8ef17558a6cfed" exitCode=0 Jan 31 05:00:46 crc kubenswrapper[4667]: I0131 05:00:46.487879 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerDied","Data":"208c07984525ecdfd6411a3103fc80a1c07f0dc4eed1c9fc9d8ef17558a6cfed"} Jan 31 05:00:46 crc kubenswrapper[4667]: I0131 05:00:46.487957 4667 scope.go:117] "RemoveContainer" containerID="3e5f360efff2cb2fbf8b3bd6a7305f45746603d4236d552d6b00acba8bf03353" Jan 31 05:00:47 crc kubenswrapper[4667]: I0131 05:00:47.501169 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" event={"ID":"b103bbd2-fb5d-4b2a-8b01-c32f699757df","Type":"ContainerStarted","Data":"7b0e9b0bc385bb0dbdd3ecbfd5273eb2412d3b22ba8e983f91cf7e3debfb69b1"} Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.161996 4667 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29497261-nqv45"] Jan 31 05:01:00 crc kubenswrapper[4667]: E0131 05:01:00.168044 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89114ed-816f-438e-8420-76150bbe787e" containerName="extract-content" Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.168094 4667 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d89114ed-816f-438e-8420-76150bbe787e" containerName="extract-content" Jan 31 05:01:00 crc kubenswrapper[4667]: E0131 05:01:00.168115 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89114ed-816f-438e-8420-76150bbe787e" containerName="extract-utilities" Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.168123 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89114ed-816f-438e-8420-76150bbe787e" containerName="extract-utilities" Jan 31 05:01:00 crc kubenswrapper[4667]: E0131 05:01:00.168148 4667 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89114ed-816f-438e-8420-76150bbe787e" containerName="registry-server" Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.168156 4667 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89114ed-816f-438e-8420-76150bbe787e" containerName="registry-server" Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.168546 4667 memory_manager.go:354] "RemoveStaleState removing state" podUID="d89114ed-816f-438e-8420-76150bbe787e" containerName="registry-server" Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.169450 4667 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29497261-nqv45" Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.176552 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29497261-nqv45"] Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.210771 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-config-data\") pod \"keystone-cron-29497261-nqv45\" (UID: \"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed\") " pod="openstack/keystone-cron-29497261-nqv45" Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.210881 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slg82\" (UniqueName: \"kubernetes.io/projected/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-kube-api-access-slg82\") pod \"keystone-cron-29497261-nqv45\" (UID: \"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed\") " pod="openstack/keystone-cron-29497261-nqv45" Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.211202 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-fernet-keys\") pod \"keystone-cron-29497261-nqv45\" (UID: \"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed\") " pod="openstack/keystone-cron-29497261-nqv45" Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.211328 4667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-combined-ca-bundle\") pod \"keystone-cron-29497261-nqv45\" (UID: \"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed\") " pod="openstack/keystone-cron-29497261-nqv45" Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.313628 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slg82\" (UniqueName: \"kubernetes.io/projected/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-kube-api-access-slg82\") pod \"keystone-cron-29497261-nqv45\" (UID: \"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed\") " pod="openstack/keystone-cron-29497261-nqv45" Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.314046 4667 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-fernet-keys\") pod \"keystone-cron-29497261-nqv45\" (UID: \"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed\") " pod="openstack/keystone-cron-29497261-nqv45" Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.314084 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-combined-ca-bundle\") pod \"keystone-cron-29497261-nqv45\" (UID: \"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed\") " pod="openstack/keystone-cron-29497261-nqv45" Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.314165 4667 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-config-data\") pod \"keystone-cron-29497261-nqv45\" (UID: \"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed\") " pod="openstack/keystone-cron-29497261-nqv45" Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.321413 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-config-data\") pod \"keystone-cron-29497261-nqv45\" (UID: \"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed\") " pod="openstack/keystone-cron-29497261-nqv45" Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.322652 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-combined-ca-bundle\") pod \"keystone-cron-29497261-nqv45\" (UID: \"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed\") " pod="openstack/keystone-cron-29497261-nqv45" Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.333731 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-fernet-keys\") pod \"keystone-cron-29497261-nqv45\" (UID: \"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed\") " pod="openstack/keystone-cron-29497261-nqv45" Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.340103 4667 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slg82\" (UniqueName: \"kubernetes.io/projected/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-kube-api-access-slg82\") pod \"keystone-cron-29497261-nqv45\" (UID: \"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed\") " pod="openstack/keystone-cron-29497261-nqv45" Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.505189 4667 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29497261-nqv45" Jan 31 05:01:00 crc kubenswrapper[4667]: I0131 05:01:00.998341 4667 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29497261-nqv45"] Jan 31 05:01:01 crc kubenswrapper[4667]: I0131 05:01:01.649342 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497261-nqv45" event={"ID":"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed","Type":"ContainerStarted","Data":"2057464338deb15f20dabe6bea59bbd776276da1d8f8684649c29318c584fa27"} Jan 31 05:01:01 crc kubenswrapper[4667]: I0131 05:01:01.649918 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497261-nqv45" event={"ID":"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed","Type":"ContainerStarted","Data":"4935c7db08f2eb44cfe929401d5c4077a7e12c774f82733b794bd0baf465a7d8"} Jan 31 05:01:01 crc kubenswrapper[4667]: I0131 05:01:01.680322 4667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29497261-nqv45" podStartSLOduration=1.6803010550000002 podStartE2EDuration="1.680301055s" podCreationTimestamp="2026-01-31 05:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 05:01:01.676380662 +0000 UTC m=+4385.192715971" watchObservedRunningTime="2026-01-31 05:01:01.680301055 +0000 UTC m=+4385.196636374" Jan 31 05:01:05 crc kubenswrapper[4667]: I0131 05:01:05.683336 4667 generic.go:334] "Generic (PLEG): container finished" podID="1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed" containerID="2057464338deb15f20dabe6bea59bbd776276da1d8f8684649c29318c584fa27" exitCode=0 Jan 31 05:01:05 crc kubenswrapper[4667]: I0131 05:01:05.683558 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497261-nqv45" event={"ID":"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed","Type":"ContainerDied","Data":"2057464338deb15f20dabe6bea59bbd776276da1d8f8684649c29318c584fa27"} Jan 31 05:01:07 crc kubenswrapper[4667]: I0131 05:01:07.105612 4667 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29497261-nqv45" Jan 31 05:01:07 crc kubenswrapper[4667]: I0131 05:01:07.154925 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-config-data\") pod \"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed\" (UID: \"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed\") " Jan 31 05:01:07 crc kubenswrapper[4667]: I0131 05:01:07.154976 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-fernet-keys\") pod \"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed\" (UID: \"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed\") " Jan 31 05:01:07 crc kubenswrapper[4667]: I0131 05:01:07.155077 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-combined-ca-bundle\") pod \"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed\" (UID: \"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed\") " Jan 31 05:01:07 crc kubenswrapper[4667]: I0131 05:01:07.155177 4667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slg82\" (UniqueName: \"kubernetes.io/projected/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-kube-api-access-slg82\") pod \"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed\" (UID: \"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed\") " Jan 31 05:01:07 crc kubenswrapper[4667]: I0131 05:01:07.162994 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed" (UID: "1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:07 crc kubenswrapper[4667]: I0131 05:01:07.163058 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-kube-api-access-slg82" (OuterVolumeSpecName: "kube-api-access-slg82") pod "1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed" (UID: "1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed"). InnerVolumeSpecName "kube-api-access-slg82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 05:01:07 crc kubenswrapper[4667]: I0131 05:01:07.187409 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed" (UID: "1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:07 crc kubenswrapper[4667]: I0131 05:01:07.206247 4667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-config-data" (OuterVolumeSpecName: "config-data") pod "1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed" (UID: "1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 05:01:07 crc kubenswrapper[4667]: I0131 05:01:07.256800 4667 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slg82\" (UniqueName: \"kubernetes.io/projected/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-kube-api-access-slg82\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:07 crc kubenswrapper[4667]: I0131 05:01:07.256830 4667 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:07 crc kubenswrapper[4667]: I0131 05:01:07.256853 4667 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:07 crc kubenswrapper[4667]: I0131 05:01:07.256862 4667 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 05:01:07 crc kubenswrapper[4667]: I0131 05:01:07.703825 4667 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29497261-nqv45" event={"ID":"1a7bcdb4-ffcd-4e69-b4a7-cc53ee9178ed","Type":"ContainerDied","Data":"4935c7db08f2eb44cfe929401d5c4077a7e12c774f82733b794bd0baf465a7d8"} Jan 31 05:01:07 crc kubenswrapper[4667]: I0131 05:01:07.703920 4667 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29497261-nqv45" Jan 31 05:01:07 crc kubenswrapper[4667]: I0131 05:01:07.703946 4667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4935c7db08f2eb44cfe929401d5c4077a7e12c774f82733b794bd0baf465a7d8" Jan 31 05:01:22 crc kubenswrapper[4667]: I0131 05:01:22.851475 4667 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-8bff87d99-j8cd2" podUID="30fc5b26-45dd-42f8-9a58-7ba07c5aa56a" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 31 05:03:15 crc kubenswrapper[4667]: I0131 05:03:15.704635 4667 patch_prober.go:28] interesting pod/machine-config-daemon-j9b7g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 05:03:15 crc kubenswrapper[4667]: I0131 05:03:15.706057 4667 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9b7g" podUID="b103bbd2-fb5d-4b2a-8b01-c32f699757df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"